A codebase for working with Open Pre-trained Transformers.
The OPT 125M--66B models are now available in Hugging Face Transformers. You can access them under the facebook organization on the Hugging Face Hub.
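As a minimal sketch of loading one of these checkpoints through Transformers (assuming the `transformers` and `torch` packages are installed; the smallest model, `facebook/opt-125m`, is used here to keep the download small):

```python
# Load OPT-125M from the Hugging Face Hub and generate a short continuation.
# Note: the first run downloads the model weights (~250 MB).
from transformers import pipeline

generator = pipeline("text-generation", model="facebook/opt-125m")

prompt = "Hello, I am a language model and"
out = generator(prompt, max_new_tokens=20, do_sample=False)
print(out[0]["generated_text"])
```

Larger checkpoints (e.g. `facebook/opt-1.3b`) can be swapped in by changing the model name, at the cost of more memory and download time.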
The OPT 125M--175B models are now supported in the Alpa project, which enables serving OPT-175B with more flexible parallelisms on older generations of GPUs, such as 40GB A100, V100, T4, M60, etc.
The OPT models are now supported in Colossal-AI, which helps users train and serve OPT models efficiently, reducing both the hardware budget required for large AI models and the engineering effort of deployment.
The OPT 125M--66B models can be executed with CTranslate2, which is a fast inference engine for Transformer models. The project integrates the SmoothQuant technique to allow 8-bit quantization of OPT models. See the usage example to get started.
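A brief sketch of running OPT through CTranslate2 (assuming the `ctranslate2` and `transformers` packages are installed, and that the checkpoint has already been converted, e.g. with `ct2-transformers-converter --model facebook/opt-125m --output_dir opt-125m-ct2 --quantization int8`; the directory name `opt-125m-ct2` is an assumption of this example):

```python
# Generate text with a converted OPT model using CTranslate2.
# Tokenization is done with the original Hugging Face tokenizer;
# CTranslate2 works on token strings rather than raw text.
import ctranslate2
import transformers

generator = ctranslate2.Generator("opt-125m-ct2")  # path from the conversion step above
tokenizer = transformers.AutoTokenizer.from_pretrained("facebook/opt-125m")

start_tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode("Hello, my name is"))
results = generator.generate_batch([start_tokens], max_length=30)
output_ids = tokenizer.convert_tokens_to_ids(results[0].sequences[0])
print(tokenizer.decode(output_ids))
```

With `--quantization int8`, the converted model runs with roughly a quarter of the FP32 memory footprint; see the CTranslate2 documentation for the full set of conversion and generation options.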
Follow setup instructions here to get started.
If you have any questions, bug reports, or feature requests regarding either the codebase or the models released in the projects section, please don't hesitate to post on our GitHub Issues page.
Please remember to follow our Code of Conduct.
We welcome PRs from the community!
You can find information about contributing to metaseq in our Contributing document.
Metaseq is currently maintained by the CODEOWNERS: Susan Zhang, Naman Goyal, Punit Singh Koura, Moya Chen, Kurt Shuster, Ruan Silva, David Esiobu, Igor Molybog, Peter Albert, Sharan Narang, Andrew Poulton, Nikolay Bashlykov, and Binh Tang.
Previous maintainers include: Stephen Roller, Anjali Sridhar, Christopher Dewan.
The majority of metaseq is licensed under the MIT license; however, portions of the project are available under separate license terms:
- Megatron-LM is licensed under the Megatron-LM license