This module provides tools to design experiments and benchmark multi-armed bandit policies. The goal is to make it easy to evaluate a new algorithm on multiple problems: either generate a simulated dataset and evaluate online, or load a real dataset and evaluate offline.
The intended workflow is:
Choose a policy/policies, initialize them
Load dataset or generate synthetic samples
Initialize environment and benchmark parameters (n_runs, results_folder, ...)
Run the simulation and get a report as plots and comparative HTML/LaTeX tables
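The workflow above can be sketched as a minimal online evaluation loop. This example does not use the bitl API; the policy and simulation code are illustrative stand-ins (an epsilon-greedy policy on a synthetic Bernoulli bandit) for the "choose a policy, generate samples, run the simulation" steps.

```python
import numpy as np

class EpsilonGreedy:
    """Simple epsilon-greedy policy over K arms (illustrative, not bitl's API)."""
    def __init__(self, n_arms, epsilon=0.1, seed=0):
        self.epsilon = epsilon
        self.rng = np.random.default_rng(seed)
        self.counts = np.zeros(n_arms, dtype=int)   # pulls per arm
        self.values = np.zeros(n_arms)              # running mean reward per arm

    def select(self):
        # Explore uniformly with probability epsilon, otherwise exploit.
        if self.rng.random() < self.epsilon:
            return int(self.rng.integers(len(self.counts)))
        return int(np.argmax(self.values))

    def update(self, arm, reward):
        # Incremental mean update for the pulled arm.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

def run_simulation(means, horizon=1000, seed=0):
    """Online evaluation on a synthetic Bernoulli bandit with the given arm means."""
    rng = np.random.default_rng(seed)
    policy = EpsilonGreedy(len(means), seed=seed)
    rewards = np.empty(horizon)
    for t in range(horizon):
        arm = policy.select()
        reward = float(rng.random() < means[arm])  # Bernoulli draw
        policy.update(arm, reward)
        rewards[t] = reward
    regret = horizon * max(means) - rewards.sum()  # realized regret vs. best arm
    return rewards, regret

rewards, regret = run_simulation([0.2, 0.5, 0.8], horizon=2000)
print(f"mean reward: {rewards.mean():.3f}, regret: {regret:.1f}")
```

In the package itself, the simulation loop, result aggregation, and report generation are handled by the evaluation and utils modules rather than written by hand as above.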
environment
: defines global parameters, features, feedback graphs, etc.
policies
: example policies for different classes of bandits
datasets
: fetch, load, and generate data
evaluation
: set up and run evaluation and comparison loops
utils
: helper functions to generate plots and output formatted results (HTML, TeX)
pip install bitl
Download the source
git clone https://github.com/tritas/bitl.git
Install some dependencies
pip install -r requirements.txt
Install the package
python setup.py install
or
python setup.py develop (optionally with --user)
The examples folder showcases how to use the different features.