Improved regret analysis for variance-adaptive linear bandits and horizon-free linear mixture mdps

Y Kim, I Yang, KS Jun - arXiv preprint arXiv:2111.03289, 2021 - arxiv.org
arXiv preprint arXiv:2111.03289, 2021arxiv.org
In online learning problems, exploiting low variance plays an important role in obtaining tight
performance guarantees yet is challenging because variances are often not known a priori.
Recently, considerable progress has been made by Zhang et al.(2021) where they obtain a
variance-adaptive regret bound for linear bandits without knowledge of the variances and a
horizon-free regret bound for linear mixture Markov decision processes (MDPs). In this
paper, we present novel analyses that improve their regret bounds significantly. For linear …
In online learning problems, exploiting low variance plays an important role in obtaining tight performance guarantees yet is challenging because variances are often not known a priori. Recently, considerable progress has been made by Zhang et al. (2021) where they obtain a variance-adaptive regret bound for linear bandits without knowledge of the variances and a horizon-free regret bound for linear mixture Markov decision processes (MDPs). In this paper, we present novel analyses that improve their regret bounds significantly. For linear bandits, we achieve where is the dimension of the features, is the time horizon, and is the noise variance at time step , and ignores polylogarithmic dependence, which is a factor of improvement. For linear mixture MDPs with the assumption of maximum cumulative reward in an episode being in , we achieve a horizon-free regret bound of where is the number of base models and is the number of episodes. This is a factor of improvement in the leading term and in the lower order term. Our analysis critically relies on a novel peeling-based regret analysis that leverages the elliptical potential `count' lemma.
arxiv.org