
FinRL: Financial Reinforcement Learning


FinRL (website) is the first open-source framework to show the great potential of financial reinforcement learning.

FinRL has evolved into an ecosystem, covering hundreds of financial markets, state-of-the-art algorithms, financial applications (portfolio allocation, cryptocurrency trading, high-frequency trading), live trading, cloud deployment, and more.

| Roadmap | Level | Target Users | Example | Description |
|---|---|---|---|---|
| 0.0 (Preparation) | preparation | practitioners | FinRL-Meta | a playground |
| 1.0 (Proof-of-Concept) | entry-level | beginners | this repo | demonstration, education |
| 2.0 (Professional) | intermediate-level | full-stack developers, professionals | ElegantRL | financially optimized DRL algorithms |
| 3.0 (Production) | advanced-level | investment banks, hedge funds | Podracer | cloud-native solutions |


Overview

The FinRL framework has three layers: market environments, agents, and applications.

For a trading task (on the top), an agent (in the middle) interacts with a market environment (at the bottom), making sequential decisions.

Run Stock_NeurIPS2018.ipynb for a quick start.
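For intuition, here is a minimal sketch of that interaction loop. FinRL's market environments follow the OpenAI gym interface; the classic 4-tuple step() return shown below is an assumption about the installed gym version, and random actions stand in for a trained agent.

```python
import gym


def run_episode(env: gym.Env) -> float:
    """Roll out one trading episode on a gym-style FinRL environment.

    The agent observes the market state, chooses an action (e.g. how much
    of each asset to buy or sell), and receives a reward such as the change
    in portfolio value. Random actions stand in for a trained DRL agent.
    """
    state = env.reset()
    done = False
    total_reward = 0.0
    while not done:
        action = env.action_space.sample()             # agent decision
        state, reward, done, info = env.step(action)   # market transition
        total_reward += reward
    return total_reward
```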

A video introduction to FinRL is available on the AI4Finance YouTube channel.

File Structure

The main folder finrl has three subfolders: applications, agents, and meta.

We employ a train-test-trade pipeline with three files: train.py, test.py, and trade.py (see the pipeline sketch after the file tree below).

```
FinRL
├── finrl (main folder)
│   ├── applications
│   │   ├── cryptocurrency_trading
│   │   ├── high_frequency_trading
│   │   ├── portfolio_allocation
│   │   └── stock_trading
│   ├── agents
│   │   ├── elegantrl
│   │   ├── rllib
│   │   └── stablebaseline3
│   ├── meta
│   │   ├── data_processors
│   │   ├── env_cryptocurrency_trading
│   │   ├── env_portfolio_allocation
│   │   ├── env_stock_trading
│   │   ├── preprocessor
│   │   ├── data_processor.py
│   │   ├── meta_config_tickers.py
│   │   └── meta_config.py
│   ├── config.py
│   ├── config_tickers.py
│   ├── main.py
│   ├── plot.py
│   ├── train.py
│   ├── test.py
│   └── trade.py
│
├── tutorials (educational notebook files)
├── tests (unit tests to verify code on env & data)
│   ├── environments
│   │   └── test_env_cashpenalty.py
│   └── downloaders
│       ├── test_yahoodownload.py
│       └── test_alpaca_downloader.py
├── setup.py
├── requirements.txt
└── README.md
```
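The sketch below illustrates how the train-test-trade pipeline chains together. The three stage functions are hypothetical placeholders that mirror the roles of finrl/train.py, finrl/test.py, and finrl/trade.py; they are not the actual FinRL entry points, and the date windows and promotion rule are purely illustrative.

```python
# Hypothetical placeholders mirroring finrl/train.py, finrl/test.py and
# finrl/trade.py -- not the actual FinRL entry points.

TRAIN_WINDOW = ("2015-01-01", "2020-12-31")   # fit the DRL agent
TEST_WINDOW = ("2021-01-01", "2021-12-31")    # out-of-sample backtest
TRADE_WINDOW = ("2022-01-01", None)           # paper/live trading


def run_pipeline(train_stage, test_stage, trade_stage):
    """Chain the three stages: only a model that passes the out-of-sample
    backtest is promoted to (paper) trading."""
    model = train_stage(*TRAIN_WINDOW)
    metrics = test_stage(model, *TEST_WINDOW)
    if metrics["sharpe"] > 1.0:               # illustrative promotion rule
        trade_stage(model, *TRADE_WINDOW)
    return metrics
```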

Supported Data Sources

| Data Source | Type | Range and Frequency | Request Limits | Raw Data | Preprocessed Data |
|---|---|---|---|---|---|
| Alpaca | US Stocks, ETFs | 2015-now, 1min | Account-specific | OHLCV | Prices & Indicators |
| Baostock | CN Securities | 1990-12-19-now, 5min | Account-specific | OHLCV | Prices & Indicators |
| Binance | Cryptocurrency | API-specific, 1s, 1min | API-specific | Tick-level daily aggregated trades, OHLCV | Prices & Indicators |
| CCXT | Cryptocurrency | API-specific, 1min | API-specific | OHLCV | Prices & Indicators |
| IEXCloud | NMS US securities | 1970-now, 1 day | 100 per second per IP | OHLCV | Prices & Indicators |
| JoinQuant | CN Securities | 2005-now, 1min | 3 requests each time | OHLCV | Prices & Indicators |
| QuantConnect | US Securities | 1998-now, 1s | NA | OHLCV | Prices & Indicators |
| RiceQuant | CN Securities | 2005-now, 1ms | Account-specific | OHLCV | Prices & Indicators |
| Tushare | CN Securities, A share | -now, 1min | Account-specific | OHLCV | Prices & Indicators |
| WRDS | US Securities | 2003-now, 1ms | 5 requests each time | Intraday Trades | Prices & Indicators |
| YahooFinance | US Securities | Frequency-specific, 1min | 2,000/hour | OHLCV | Prices & Indicators |

OHLCV: open, high, low, and close prices; volume. adjusted_close: adjusted close price.

Technical indicators: 'macd', 'boll_ub', 'boll_lb', 'rsi_30', 'dx_30', 'close_30_sma', 'close_60_sma'. Users can also add new features.
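As a sketch, the snippet below pulls OHLCV data from Yahoo Finance and attaches the indicators listed above. The import paths and keyword arguments follow the finrl/meta folder layout shown earlier, but they are assumptions that may differ between FinRL versions, so check them against the installed package.

```python
from finrl.meta.preprocessor.yahoodownloader import YahooDownloader
from finrl.meta.preprocessor.preprocessors import FeatureEngineer

# Download daily OHLCV for a couple of tickers from Yahoo Finance.
raw_df = YahooDownloader(
    start_date="2020-01-01",
    end_date="2021-12-31",
    ticker_list=["AAPL", "MSFT"],
).fetch_data()

# Attach the default technical indicators (MACD, Bollinger bands, RSI, ...).
fe = FeatureEngineer(
    use_technical_indicator=True,
    tech_indicator_list=["macd", "boll_ub", "boll_lb", "rsi_30",
                         "dx_30", "close_30_sma", "close_60_sma"],
    use_turbulence=True,
    user_defined_feature=False,
)
processed_df = fe.preprocess_data(raw_df)
```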

Installation
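FinRL is released on PyPI, so the usual path is to install the latest release with pip install finrl; alternatively, clone this repository and install from source using the setup.py and requirements.txt listed above.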

Status Update

Version History
  • 2021-08-25 0.3.1: PyTorch version with a three-layer architecture: apps (financial tasks), drl_agents (DRL algorithms), neo_finrl (gym environments)
  • 2020-12-14: Upgraded to PyTorch with stable-baselines3; removed TensorFlow 1.0 for now, with TensorFlow 2.0 support under development
  • 2020-11-27 0.1: Beta version with TensorFlow 1.5

Contributions

  • FinRL is the first open-source framework to demonstrate the great potential of financial reinforcement learning. It has evolved into an ecosystem.
  • The application layer provides interfaces for users to customize FinRL to their own trading tasks. Automated backtesting tools and performance metrics are provided to help quantitative traders iterate on trading strategies quickly. Profitable trading strategies are reproducible, and hands-on tutorials are provided in a beginner-friendly fashion. Adjusting the trained models to the rapidly changing markets is also possible.
  • The agent layer provides state-of-the-art DRL algorithms that are adapted to finance with fine-tuned hyperparameters (see the training sketch after this list). Users can add new DRL algorithms.
  • The environment layer includes not only a collection of historical data APIs, but also live trading APIs. They are reconfigured into standard OpenAI gym-style environments. Moreover, it incorporates market frictions and allows users to customize the trading time granularity.
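A minimal training sketch is below, using the Stable-Baselines3 wrapper from the agents folder. The import path and method names are assumptions based on this repository's layout and may differ by version; env_train stands for any prepared gym-style FinRL training environment.

```python
# Assumed import path; adjust to the installed FinRL version if it differs.
from finrl.agents.stablebaselines3.models import DRLAgent


def train_ppo(env_train, total_timesteps: int = 50_000):
    """Train a PPO agent on a prepared FinRL training environment."""
    agent = DRLAgent(env=env_train)
    # Pick an algorithm by name; FinRL ships finance-tuned default
    # hyperparameters that can be overridden via model_kwargs.
    model = agent.get_model("ppo")
    return agent.train_model(
        model=model,
        tb_log_name="ppo",
        total_timesteps=total_timesteps,
    )
```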

Tutorials

Publications

| Title | Conference | Link | Citations | Year |
|---|---|---|---|---|
| FinRL-Meta: A Universe of Near-Real Market Environments for Data-Driven Deep Reinforcement Learning in Quantitative Finance | NeurIPS 2021 Data-Centric AI Workshop | paper: https://arxiv.org/abs/2112.06753; code: https://github.com/AI4Finance-Foundation/FinRL-Meta | 2 | 2021 |
| Explainable deep reinforcement learning for portfolio management: An empirical approach | ICAIF 2021: ACM International Conference on AI in Finance | paper: https://arxiv.org/abs/2111.03995; code: https://github.com/AI4Finance-Foundation/FinRL | 1 | 2021 |
| FinRL-Podracer: High performance and scalable deep reinforcement learning for quantitative finance | ICAIF 2021: ACM International Conference on AI in Finance | paper: https://arxiv.org/abs/2111.05188; code: https://github.com/AI4Finance-Foundation/FinRL_Podracer | 2 | 2021 |
| FinRL: Deep reinforcement learning framework to automate trading in quantitative finance | ICAIF 2021: ACM International Conference on AI in Finance | paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3955949; code: https://github.com/AI4Finance-Foundation/FinRL | 7 | 2021 |
| FinRL: A deep reinforcement learning library for automated stock trading in quantitative finance | NeurIPS 2020 Deep RL Workshop | paper: https://arxiv.org/abs/2011.09607; code: https://github.com/AI4Finance-Foundation/FinRL | 25 | 2020 |
| Deep reinforcement learning for automated stock trading: An ensemble strategy | ICAIF 2020: ACM International Conference on AI in Finance | paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3690996; repo: https://github.com/AI4Finance-Foundation/Deep-Reinforcement-Learning-for-Automated-Stock-Trading-Ensemble-Strategy-ICAIF-2020; code: https://github.com/AI4Finance-Foundation/FinRL-Meta/blob/master/tutorials/2-Advance/FinRL_Ensemble_StockTrading_ICAIF_2020/FinRL_Ensemble_StockTrading_ICAIF_2020.ipynb | 46 | 2020 |
| Multi-agent reinforcement learning for liquidation strategy analysis | ICML 2019 Workshop on AI in Finance: Applications and Infrastructure for Multi-Agent Learning | paper: https://arxiv.org/abs/1906.11046; repo: https://github.com/AI4Finance-Foundation/Liquidation-Analysis-using-Multi-Agent-Reinforcement-Learning-ICML-2019; code: https://github.com/AI4Finance-Foundation/FinRL-Meta/blob/master/tutorials/2-Advance/execution_optimizing/execution_optimizing.ipynb | 19 | 2019 |
| Practical deep reinforcement learning approach for stock trading | NeurIPS 2018 Workshop on Challenges and Opportunities for AI in Financial Services | paper: https://arxiv.org/abs/1811.07522; code: https://github.com/AI4Finance-Foundation/DQN-DDPG_Stock_Trading | 87 | 2018 |

News

Citing FinRL

@article{finrl2020,
    author  = {Liu, Xiao-Yang and Yang, Hongyang and Chen, Qian and Zhang, Runjia and Yang, Liuqing and Xiao, Bowen and Wang, Christina Dan},
    title   = {{FinRL}: A deep reinforcement learning library for automated stock trading in quantitative finance},
    journal = {Deep RL Workshop, NeurIPS 2020},
    year    = {2020}
}
@article{liu2021finrl,
    author  = {Liu, Xiao-Yang and Yang, Hongyang and Gao, Jiechao and Wang, Christina Dan},
    title   = {{FinRL}: Deep reinforcement learning framework to automate trading in quantitative finance},
    journal = {ACM International Conference on AI in Finance (ICAIF)},
    year    = {2021}
}

We have published FinTech papers; please check Google Scholar. Closely related papers are given in the list above.

Join and Contribute

Welcome to the AI4Finance community!

Discuss FinRL via the AI4Finance mailing list and the AI4Finance Slack channel.

Follow us on WeChat.

Please check the Contributing Guidelines.

Contributors

Thank you!

Sponsorship

Gift money to support AI4Finance, a non-profit community, is welcome. Use the links in the right column, or scan the Venmo QR code.

Sponsorship records at Issue #425

Network: USDT-TRC20

LICENSE

MIT License

Disclaimer: Nothing herein is financial advice, nor a recommendation to trade real money. Please use common sense and always consult a professional before trading or investing.
