Traversing Pareto optimal policies: Provably efficient multi-objective reinforcement learning

S Qiu, D Zhang, R Yang, B Lyu, T Zhang - arXiv preprint arXiv:2407.17466, 2024 - arxiv.org
This paper investigates multi-objective reinforcement learning (MORL), which focuses on learning Pareto optimal policies in the presence of multiple reward functions. Despite MORL's significant empirical success, there is still no satisfactory understanding of the various MORL optimization targets or of efficient learning algorithms. Our work offers a systematic analysis of several optimization targets, assessing their ability to find all Pareto optimal policies and the controllability of learned policies via preferences over the different objectives. We then identify Tchebycheff scalarization as a favorable scalarization method for MORL. Because Tchebycheff scalarization is non-smooth, we reformulate its minimization problem as a new min-max-max optimization problem. Then, for the stochastic policy class, we propose efficient algorithms that use this reformulation to learn Pareto optimal policies. We first propose an online UCB-based algorithm that achieves a prescribed learning error with a corresponding sample-complexity guarantee for a single given preference. To further reduce the cost of environment exploration under different preferences, we propose a preference-free framework that first explores the environment without pre-defined preferences and then generates solutions for any number of preferences. We prove that it incurs only a bounded exploration complexity in the exploration phase and demands no additional exploration afterward. Lastly, we analyze smooth Tchebycheff scalarization, an extension of Tchebycheff scalarization, which is provably more effective at distinguishing Pareto optimal policies from weakly Pareto optimal policies based on the entries of the preference vectors. Furthermore, we extend our algorithms and theoretical analysis to accommodate this optimization target.
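
For orientation, Tchebycheff scalarization in the MORL setting is typically written (the notation below is assumed for illustration and is not taken from this abstract) as

$$\min_{\pi} \; \max_{i \in [m]} \; \lambda_i \big( z_i^{*} - J_i(\pi) \big),$$

where $\lambda \in \Delta_m$ is a preference vector over the $m$ objectives, $J_i(\pi)$ is the expected return of policy $\pi$ under the $i$-th reward, and $z^{*}$ is an ideal (utopia) point. The smooth Tchebycheff variant mentioned above is commonly obtained by replacing the inner maximum with a log-sum-exp smoothing,

$$\mathrm{STCH}_{\mu}(\pi; \lambda) = \mu \log \sum_{i=1}^{m} \exp\!\Big( \tfrac{\lambda_i \, ( z_i^{*} - J_i(\pi) )}{\mu} \Big), \qquad \mu > 0,$$

which recovers the non-smooth scalarization as $\mu \to 0$.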