Python library for Multi-Armed Bandits
Implements the following algorithms:
- Epsilon-Greedy
- UCB1
- Softmax
- Thompson Sampling (Bayesian)
  - Bernoulli and Binomial rewards, modeled with conjugate Beta priors
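As a rough sketch of how the four strategies above choose an arm, here is a minimal, self-contained version for Bernoulli rewards. The function and parameter names are illustrative only and are not necessarily this library's API:

```python
import math
import random

# Illustrative implementations of the four listed strategies for a
# Bernoulli bandit. `means` are the observed per-arm success rates,
# `counts` the per-arm play counts, `total` the total number of plays.

def epsilon_greedy(means, epsilon=0.1):
    """With probability epsilon explore uniformly, else exploit the best arm."""
    if random.random() < epsilon:
        return random.randrange(len(means))
    return max(range(len(means)), key=lambda i: means[i])

def ucb1(means, counts, total):
    """Play the arm maximizing mean + sqrt(2 ln(t) / n_i)."""
    for i, n in enumerate(counts):
        if n == 0:  # play every arm once before applying the bound
            return i
    return max(range(len(means)),
               key=lambda i: means[i] + math.sqrt(2 * math.log(total) / counts[i]))

def softmax(means, temperature=0.1):
    """Sample an arm with probability proportional to exp(mean / T)."""
    weights = [math.exp(m / temperature) for m in means]
    return random.choices(range(len(means)), weights=weights)[0]

def thompson(successes, failures):
    """Sample each arm's Beta(s+1, f+1) posterior; play the best sample."""
    samples = [random.betavariate(s + 1, f + 1)
               for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=lambda i: samples[i])
```

Each function returns an arm index; a driver loop would pull the chosen arm, observe the 0/1 reward, and update `means`, `counts`, or the success/failure tallies before the next round.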
Links:
- When to Run Bandit Tests Instead of A/B/n Tests
- Bandit theory, part I
- Bandit theory, part II
- Bandits for Recommendation Systems
- Recommendations with Thompson Sampling
- Personalization with Contextual Bandits
- Bayesian Bandits - optimizing click throughs with statistics
- Multi-Armed Bandits
- Bayesian Bandits
Talks:
- Python Multi-armed Bandits (and Beer!)
- Boston Bayesians Meetup 2016 - Bayesian Bandits From Scratch
- ODSC East 2016 - Bayesian Bandits
- NYC ML Meetup 2010 - Learning for Contextual Bandits