Mind the gap: Offline policy optimization for imperfect rewards

J Li, X Hu, H Xu, J Liu, X Zhan, QS Jia, YQ Zhang
arXiv preprint arXiv:2302.01667, 2023
The reward function is essential in reinforcement learning (RL), serving as the guiding signal that incentivizes agents to solve given tasks; however, it is also notoriously difficult to design. In many cases, only imperfect rewards are available, which inflicts substantial performance loss on RL agents. In this study, we propose a unified offline policy optimization approach, RGM (Reward Gap Minimization), which can handle diverse types of imperfect rewards. RGM is formulated as a bi-level optimization problem: the upper layer optimizes a reward correction term that performs visitation distribution matching w.r.t. some expert data; the lower layer solves a pessimistic RL problem with the corrected rewards. By exploiting the duality of the lower layer, we derive a tractable algorithm that enables sample-based learning without any online interactions. Comprehensive experiments demonstrate that RGM achieves superior performance compared to existing methods under diverse settings of imperfect rewards. Furthermore, RGM can effectively correct rewards that are wrong or inconsistent with expert preference and retrieve useful information from biased rewards.
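
The bi-level structure described above can be illustrated on a toy tabular MDP. The sketch below is only a rough illustration under assumed quantities (a random MDP P, a synthetic expert visitation d_expert, and a softmax-regularized value-iteration solver standing in for the pessimistic lower-level RL step and its dual); it is not the paper's actual RGM algorithm, and all names and update rules here are illustrative choices.

import numpy as np

# Toy setup: small tabular MDP with an imperfect reward and an (assumed given)
# expert state-action visitation. Everything here is synthetic.
rng = np.random.default_rng(0)
S, A, gamma = 6, 3, 0.95

P = rng.dirichlet(np.ones(S), size=(S, A))               # transitions P[s, a, s']
r_imperfect = rng.normal(size=(S, A))                     # imperfect / biased reward
d_expert = rng.dirichlet(np.ones(S * A)).reshape(S, A)    # stand-in for expert visitation

delta_r = np.zeros((S, A))                                # learnable reward correction term

def soft_policy_from_q(q, temp=1.0):
    # Softmax policy; the entropy regularization is a simple stand-in for
    # the pessimism/regularization of the lower-level offline RL problem.
    z = q / temp
    z -= z.max(axis=1, keepdims=True)
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)

def solve_lower(r, iters=200):
    # Lower level (sketch): regularized value iteration with the corrected reward.
    q = np.zeros((S, A))
    for _ in range(iters):
        pi = soft_policy_from_q(q)
        v = (pi * q).sum(axis=1)
        q = r + gamma * P @ v
    return soft_policy_from_q(q)

def visitation(pi, iters=500):
    # Discounted state-action visitation of pi under P (uniform initial state).
    d_s = np.full(S, 1.0 / S)
    d = np.zeros((S, A))
    coef = 1.0
    for _ in range(iters):
        d += coef * d_s[:, None] * pi
        d_s = np.einsum('s,sa,sap->p', d_s, pi, P)
        coef *= gamma
    return d * (1 - gamma)

# Upper level (sketch): adjust the correction so that the visitation induced by
# the lower-level policy moves toward the expert visitation.
for step in range(50):
    pi = solve_lower(r_imperfect + delta_r)
    d_pi = visitation(pi)
    delta_r += 0.5 * (d_expert - d_pi)                    # crude distribution-matching update
    gap = np.abs(d_expert - d_pi).sum()
    if step % 10 == 0:
        print(f"step {step:2d}  visitation gap (L1) = {gap:.3f}")

In this sketch the upper-level update simply raises the corrected reward where the expert visits more often than the current policy and lowers it elsewhere; the paper instead derives the correction and the sample-based updates from the dual of the pessimistic lower-level problem.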