MLPro provides complete, standardized, and reusable functionalities to support your scientific research, educational tasks or industrial projects in machine learning.
- Overarching software infrastructure (mathematics, data management and plotting, UI framework, logging, ...)
- Fundamental ML classes for adaptive models and their training and hyperparameter tuning
- Powerful environment templates for simulation, training, and real operation
- Agent templates ranging from single agents and model-based agents (MBRL) with action planning to multi-agents (MARL); see the sketch after this list
- Advanced training/tuning functionalities with separate evaluation and progress detection
- Growing pool of reusable environments from automation and robotics
- Templates for native game theory, regardless of the number of players or the type of game
- Templates for multi-player dynamic games, including game boards, players, and more
- Reuse of the advanced training/tuning classes and multi-agent environments of the MLPro-RL sub-package
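As a rough illustration, the sketch below shows how a single-agent scenario could be assembled from these templates. It follows the class names used in MLPro's RL templates and howtos (`RLScenario`, `Agent`, `RLTraining`); `MyEnvironment` and `MyPolicy` are hypothetical placeholders, and exact constructor signatures may differ between MLPro versions.

```python
# Illustrative sketch only: class names follow MLPro's RL templates, but
# MyEnvironment and MyPolicy are hypothetical placeholders, and constructor
# signatures may differ between MLPro versions.
from mlpro.rl.models import *


class MyScenario(RLScenario):
    """Couples one environment with one agent."""

    C_NAME = 'MyScenario'

    def _setup(self, p_mode, p_ada, p_logging):
        # Environment: an own implementation based on MLPro's templates
        # or one taken from the environment pool
        self._env = MyEnvironment(p_logging=p_logging)

        # Single agent with a custom policy; an environment model could be
        # supplied instead of None for model-based RL (MBRL) with action planning
        return Agent(
            p_policy=MyPolicy(
                p_observation_space=self._env.get_state_space(),
                p_action_space=self._env.get_action_space(),
                p_ada=p_ada,
                p_logging=p_logging),
            p_envmodel=None,
            p_name='Agent 1',
            p_ada=p_ada,
            p_logging=p_logging)


# Training with a cycle limit; further parameters for separate evaluation
# and progress/stagnation detection are part of the training classes
training = RLTraining(
    p_scenario_cls=MyScenario,
    p_cycle_limit=10000,
    p_logging=Log.C_LOG_WE)

training.run()
```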
MLPro provides wrapper classes (sketched below) for:
- Environments of OpenAI Gym and PettingZoo
- Policy algorithms of Stable Baselines 3
- Hyperparameter tuning with Hyperopt
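A minimal, non-authoritative sketch of how these wrappers could be combined, reusing a Gym environment and a Stable Baselines 3 policy inside an MLPro scenario. The wrapper class names and parameters below follow MLPro's documentation and may differ between versions.

```python
# Illustrative sketch only: wrapper names and parameters follow MLPro's
# documentation and may differ between MLPro versions.
import gym
from stable_baselines3 import PPO

from mlpro.rl.models import *
from mlpro.wrappers.openai_gym import WrEnvGYM2MLPro
from mlpro.wrappers.sb3 import WrPolicySB32MLPro


class WrappedScenario(RLScenario):

    C_NAME = 'WrappedScenario'

    def _setup(self, p_mode, p_ada, p_logging):
        # Reuse a native Gym environment as an MLPro environment
        self._env = WrEnvGYM2MLPro(gym.make('CartPole-v1'), p_logging=p_logging)

        # Reuse a Stable Baselines 3 policy as an MLPro policy
        policy_sb3 = PPO(policy='MlpPolicy', env=None, _init_setup_model=False)
        policy_wrapped = WrPolicySB32MLPro(
            p_sb3_policy=policy_sb3,
            p_cycle_limit=1000,
            p_observation_space=self._env.get_state_space(),
            p_action_space=self._env.get_action_space(),
            p_ada=p_ada,
            p_logging=p_logging)

        return Agent(p_policy=policy_wrapped, p_ada=p_ada, p_logging=p_logging)
```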
The documentation is available at: https://mlpro.readthedocs.io/
- Consistent object-oriented design and programming (OOD/OOP)
- Quality assurance by test-driven development
- Hosted and managed on GitHub
- Agile CI/CD approach with automated testing and deployment
- Clean code paradigm
Project MLPro was started in 2021 by the Group for Automation Technology and Learning Systems at the South Westphalia University of Applied Sciences, Germany.
MLPro is designed and developed by Detlef Arend, Steve Yuwono, M Rizky Diprasetya, and further contributors.
If you want to contribute, please read CONTRIBUTING.md