This repository contains code for training, finetuning, evaluating, and deploying LLMs for inference with Composer and the MosaicML platform. Designed to be easy to use, efficient, and flexible, this codebase enables rapid experimentation with the latest techniques.
You'll find in this repo:
- `llmfoundry/` - source code for models, datasets, callbacks, utilities, etc.
- `scripts/` - scripts to run LLM workloads
  - `data_prep/` - convert text data from original sources to StreamingDataset format
  - `train/` - train or finetune HuggingFace and MPT models from 125M - 70B parameters
    - `train/benchmarking/` - profile training throughput and MFU
  - `inference/` - convert models to HuggingFace or ONNX format, and generate responses
    - `inference/benchmarking/` - profile inference latency and throughput
  - `eval/` - evaluate LLMs on academic (or custom) in-context-learning tasks
- `mcli/` - launch any of these workloads using MCLI and the MosaicML platform
MPT-7B is a GPT-style model, and the first in the MosaicML Foundation Series of models. Trained on 1T tokens of a MosaicML-curated dataset, MPT-7B is open-source, commercially usable, and matches LLaMA-7B on evaluation metrics. The MPT architecture incorporates the latest techniques in LLM modeling: FlashAttention for efficiency, ALiBi for context-length extrapolation, and stability improvements to mitigate loss spikes. The base model and several variants, including a 64K context length fine-tuned model (!!), are all available:
| Model | Context Length | Download | Demo | Commercial use? |
|---|---|---|---|---|
| MPT-7B | 2048 | https://huggingface.co/mosaicml/mpt-7b | | Yes |
| MPT-7B-Instruct | 2048 | https://huggingface.co/mosaicml/mpt-7b-instruct | Demo | Yes |
| MPT-7B-Chat | 2048 | https://huggingface.co/mosaicml/mpt-7b-chat | Demo | No |
| MPT-7B-StoryWriter | 65536 | https://huggingface.co/mosaicml/mpt-7b-storywriter | | Yes |
To try out these models locally, follow the instructions in `scripts/inference/README.md` to prompt HF models using our `hf_generate.py` or `hf_chat.py` scripts.
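Alternatively, the released checkpoints can be loaded directly with the Hugging Face `transformers` library. The snippet below is a minimal sketch (not taken from this repo's scripts); it assumes `transformers`, `torch`, and `einops` are installed, a CUDA GPU is available, and it uses `trust_remote_code=True` because the MPT checkpoints ship custom modeling code:

```python
import torch
import transformers

# trust_remote_code=True is required because the MPT checkpoints ship custom modeling
# code on the Hub (which also needs the `einops` package installed).
model = transformers.AutoModelForCausalLM.from_pretrained(
    'mosaicml/mpt-7b',
    torch_dtype=torch.bfloat16,  # assumption: bf16 to keep the 7B weights memory-friendly
    trust_remote_code=True,
).to('cuda')  # assumption: a CUDA GPU is available

# MPT-7B was trained with the EleutherAI/gpt-neox-20b tokenizer.
tokenizer = transformers.AutoTokenizer.from_pretrained('EleutherAI/gpt-neox-20b')

inputs = tokenizer('MosaicML is', return_tensors='pt').to('cuda')
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The `hf_generate.py` and `hf_chat.py` scripts wrap a similar flow with batching and more generation options.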
- Blog: Introducing MPT-7B
- Blog: Benchmarking LLMs on H100
- Blog: Blazingly Fast LLM Evaluation
- Blog: GPT3 Quality for $500k
- Blog: Billion parameter GPT training made easy
Here's what you need to get started with our LLM stack:
- Use a Docker image with PyTorch 1.13+, e.g. MosaicML's PyTorch base image
  - Recommended tag: `mosaicml/pytorch:1.13.1_cu117-python3.10-ubuntu20.04`
  - This image comes pre-configured with the following dependencies:
    - PyTorch Version: 1.13.1
    - CUDA Version: 11.7
    - Python Version: 3.10
    - Ubuntu Version: 20.04
    - FlashAttention kernels from HazyResearch
- Use a system with NVIDIA GPUs
To get started, clone this repo and install the requirements:
```bash
git clone https://github.com/mosaicml/llm-foundry.git
cd llm-foundry
pip install -e ".[gpu]"  # or pip install -e . if no NVIDIA GPU
```
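After installing, a quick sanity check that the environment is wired up (a minimal sketch; the exact versions printed depend on the image you chose):

```python
# Quick environment check after installation.
import torch
import llmfoundry  # the package installed by `pip install -e .`

print('PyTorch:', torch.__version__)
print('CUDA available:', torch.cuda.is_available())
print('llm-foundry imported from:', llmfoundry.__file__)
```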
Here is an end-to-end workflow for preparing a subset of the C4 dataset, training an MPT-125M model for 10 batches, converting the model to HuggingFace format, evaluating the model on the Winograd challenge, and generating responses to prompts.
If you have a write-enabled HuggingFace auth token, you can optionally upload your model to the Hub! Just export your token like this:
```bash
export HUGGING_FACE_HUB_TOKEN=your-auth-token
```
and uncomment the line containing `--hf_repo_for_upload ...`.
(Remember this is a quickstart just to demonstrate the tools -- To get good quality, the LLM must be trained for longer than 10 batches 😄)
```bash
cd scripts

# Convert C4 dataset to StreamingDataset format
python data_prep/convert_dataset_hf.py \
  --dataset c4 --data_subset en \
  --out_root my-copy-c4 --splits train_small val_small \
  --concat_tokens 2048 --tokenizer EleutherAI/gpt-neox-20b --eos_text '<|endoftext|>'
```
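Once the conversion finishes, you can spot-check the shards from Python with the `streaming` library (a minimal sketch; it assumes the `--concat_tokens` mode stored each sample's token IDs as raw int64 bytes under a `tokens` field):

```python
import numpy as np
from streaming import StreamingDataset

# Point at the local MDS shards written by convert_dataset_hf.py.
dataset = StreamingDataset(local='my-copy-c4', split='train_small', shuffle=False)

sample = dataset[0]
# Assumption: --concat_tokens stores token IDs as raw int64 bytes under 'tokens'.
tokens = np.frombuffer(sample['tokens'], dtype=np.int64)
print('num samples:', len(dataset))
print('tokens per sample:', tokens.shape[0])  # should be 2048, matching --concat_tokens
```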
```bash
# Train an MPT-125m model for 10 batches
composer train/train.py \
  train/yamls/mpt/125m.yaml \
  data_local=my-copy-c4 \
  train_loader.dataset.split=train_small \
  eval_loader.dataset.split=val_small \
  max_duration=10ba \
  eval_interval=0 \
  save_folder=mpt-125m
```
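After those 10 batches, Composer writes a checkpoint into `save_folder`. A quick way to confirm it contains model weights before converting it (a minimal sketch, assuming the standard Composer checkpoint layout with trainer state under a top-level `state` key):

```python
import torch

# Load the Composer checkpoint saved by the quickstart training run.
ckpt = torch.load('mpt-125m/ep0-ba10-rank0.pt', map_location='cpu')

# Assumption: Composer checkpoints keep trainer state under 'state',
# with model weights under 'state' -> 'model'.
model_state = ckpt['state']['model']
print('num weight tensors:', len(model_state))
print('example keys:', list(model_state)[:5])
```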
```bash
# Convert the model to HuggingFace format
python inference/convert_composer_to_hf.py \
  --composer_path mpt-125m/ep0-ba10-rank0.pt \
  --hf_output_path mpt-125m-hf \
  --output_precision bf16 \
  # --hf_repo_for_upload user-org/repo-name
```
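If the conversion succeeded, the output folder behaves like any Hugging Face checkpoint. Here is a minimal sketch of loading it programmatically (it assumes the conversion script saved the tokenizer alongside the weights, and again uses `trust_remote_code=True` for the custom MPT modeling code):

```python
import transformers

# Load the locally converted checkpoint produced by convert_composer_to_hf.py.
model = transformers.AutoModelForCausalLM.from_pretrained(
    'mpt-125m-hf', trust_remote_code=True)
tokenizer = transformers.AutoTokenizer.from_pretrained('mpt-125m-hf')

inputs = tokenizer('The answer to life, the universe, and happiness is', return_tensors='pt')
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```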
```bash
# Evaluate the model on Winograd
python eval/eval.py \
  eval/yamls/hf_eval.yaml \
  icl_tasks=eval/yamls/winograd.yaml \
  model_name_or_path=mpt-125m-hf
```
```bash
# Generate responses to prompts
python inference/hf_generate.py \
  --name_or_path mpt-125m-hf \
  --max_new_tokens 256 \
  --prompts \
    "The answer to life, the universe, and happiness is" \
    "Here's a quick recipe for baking chocolate chip cookies: Start by"
```
If you run into any problems with the code, please file a GitHub issue directly on this repo.
If you want to train LLMs on the MosaicML platform, reach out to us at [email protected]!