RGL: Randomized Greedy Learning for Non-monotone Stochastic Submodular Maximization Under Full-bandit Feedback

We investigate the problem of unconstrained combinatorial multi-armed bandits with full-bandit feedback and stochastic rewards for submodular maximization. Previous work studied the same problem under the assumption of a submodular and monotone reward function. In this work, we study a more general setting in which the reward function is not necessarily monotone and submodularity is assumed only in expectation. We propose the Randomized Greedy Learning (RGL) algorithm and prove that it achieves a sublinear regret upper bound. We also show experimentally that RGL outperforms other bandit variants in both submodular and non-submodular settings.
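To make the setting concrete, below is a minimal, hedged sketch of a randomized-greedy routine under full-bandit feedback. It assumes a `play(S)` oracle that returns a single noisy reward sample for a played set S (only the value of the whole set is observed, never individual marginal gains), and it estimates expected values by averaging repeated plays. The function names and the fixed sampling budget are illustrative assumptions, not the repository's actual interface or the paper's exact procedure; see the paper for the precise algorithm and its regret analysis.

```python
import numpy as np

def randomized_greedy_bandit_sketch(ground_set, play, samples_per_estimate=100, rng=None):
    """Illustrative sketch (not the repo's implementation) of randomized
    greedy selection when only noisy full-bandit rewards are available."""
    rng = rng or np.random.default_rng()

    def estimate(S):
        # Estimate the expected reward of set S by averaging repeated plays.
        return np.mean([play(S) for _ in range(samples_per_estimate)])

    X, Y = set(), set(ground_set)  # lower and upper candidate solutions
    for i in ground_set:
        # Estimated marginal gain of adding i to X and of removing i from Y.
        a = estimate(X | {i}) - estimate(X)
        b = estimate(Y - {i}) - estimate(Y)
        a_plus, b_plus = max(a, 0.0), max(b, 0.0)
        # Randomized decision: include i with probability proportional to its
        # clipped estimated gain, in the spirit of randomized double greedy.
        p = 1.0 if a_plus + b_plus == 0 else a_plus / (a_plus + b_plus)
        if rng.random() < p:
            X.add(i)
        else:
            Y.discard(i)
    return X  # X and Y coincide after all elements are processed
```

Each estimate consumes `samples_per_estimate` bandit plays, so the exploration budget per element trades off estimation noise against regret; the paper's analysis specifies how this budget should scale with the horizon.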

RGL Paper

Paper PDF

Citing the Project

To cite this repository in publications:

@inproceedings{fourati2023randomized,
  title={Randomized greedy learning for non-monotone stochastic submodular maximization under full-bandit feedback},
  author={Fourati, Fares and Aggarwal, Vaneet and Quinn, Christopher and Alouini, Mohamed-Slim},
  booktitle={International Conference on Artificial Intelligence and Statistics},
  pages={7455--7471},
  year={2023},
  organization={PMLR}
}