This repository will no longer be updated; please refer to https://github.com/horseee/LLM-Pruner instead.
This repository provides minimal examples of pruning Large Language Models (LLMs).
LLMs, with their enormous parameter counts and computational demands, pose significant challenges for downstream applications. Structural Pruning offers a potential solution by removing entire groups of parameters from the model. To this end, this project aims to build a straightforward and general pipeline for pruning LLaMA and other LLMs.
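As a toy illustration of what structural pruning does (this is not the pipeline used by this repository; the layer names and sizes below are made up), the PyTorch snippet removes half of the output dimensions of one linear layer and the matching input dimensions of the layer that consumes its output:

```python
import torch
import torch.nn as nn

# Toy two-layer block; a real LLM couples many more tensors (attention heads,
# MLP projections, embeddings), which is what makes structural pruning hard.
fc1 = nn.Linear(16, 8)
fc2 = nn.Linear(8, 4)

# Score each output dimension of fc1 by the L1 norm of its weight row.
scores = fc1.weight.detach().abs().sum(dim=1)          # shape: [8]
keep = torch.topk(scores, k=4).indices.sort().values   # keep the top 50%

# Build smaller layers: fc1 loses output rows, fc2 loses the matching input columns.
pruned_fc1 = nn.Linear(16, len(keep))
pruned_fc1.weight.data = fc1.weight.data[keep]
pruned_fc1.bias.data = fc1.bias.data[keep]

pruned_fc2 = nn.Linear(len(keep), 4)
pruned_fc2.weight.data = fc2.weight.data[:, keep]
pruned_fc2.bias.data = fc2.bias.data.clone()

x = torch.randn(2, 16)
print(pruned_fc2(torch.relu(pruned_fc1(x))).shape)     # torch.Size([2, 4])
```

The difficulty in a real LLM is that every pruned dimension is coupled with many other tensors, which is why a dependency-aware pruner is used below.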
Available Features:
- Layer Pruner for basic layers in LLaMA.
- Random Pruning for LLaMA-7B.
- L1/L2 Pruning for LLaMA-7B (the scoring rules behind these pruners are sketched right after this list).
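The random, L1, and L2 pruners differ only in how they score each prunable dimension before the lowest-scoring ones are removed. The function below is a hypothetical illustration of those scoring rules, not this repository's actual implementation:

```python
import torch

def dimension_scores(weight: torch.Tensor, pruner_type: str) -> torch.Tensor:
    """Score each output dimension (row) of a weight matrix; lower scores get pruned."""
    if pruner_type == "l1":
        return weight.abs().sum(dim=1)            # L1 norm of each row
    if pruner_type == "l2":
        return weight.pow(2).sum(dim=1).sqrt()    # L2 norm of each row
    if pruner_type == "random":
        return torch.rand(weight.shape[0])        # random scores -> random pruning
    raise ValueError(f"Unknown pruner_type: {pruner_type}")

# Example: keep the top 50% of the 8 output dimensions of an 8x16 weight matrix.
weight = torch.randn(8, 16)
keep = torch.topk(dimension_scores(weight, "l1"), k=4).indices
print(sorted(keep.tolist()))
```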
TODO List:
- Support LlamaForCausalLM in huggingface/transformers.
- Code for finetuning and post-training of the pruned model.
- Quantitative results.
- Structural Pruning for LLaMA-13B/33B/65B.
- More pruners: Saliency-based Pruning.
- More LLMs.
pip install -r requirements.txt
Prepare pretrained models following the official instructions.
This example expects the following files to be available:
ckpt
└── LLaMA
├── 7B
│ ├── checklist.chk
│ ├── consolidated.00.pth
│ └── params.json
├── tokenizer_checklist.chk
└── tokenizer.model
- #Params: 6.73B => 1.72B
- GPU RAM: 22,067M => 7,781M
- Pruning the model requires ~20GB of GPU memory on a single RTX 3090. A smaller pruning ratio may require more GPU memory.
Pruning: The following script globally removes 50% of the dimensions of the LLaMA-7B model, resulting in a lightweight model with 1.72B parameters. Specify the pruner type with --pruner_type <l1/l2/random> and the pruning ratio with --pruning_ratio 0.5.
python -m torch.distributed.launch --master_port 18101 --nproc_per_node 1 prune.py --pruner_type l1 --ckpt_dir ckpt/LLaMA/7B/ --tokenizer_path ckpt/LLaMA/tokenizer.model --pruning_ratio 0.5 --save_ckpt_name 'llama_prune_1.7B'
Testing: After pruning, we can load and test the pruned model with some pre-defined prompts.
python -m torch.distributed.launch --master_port 18101 --nproc_per_node 1 test_prune_model.py --save_ckpt_name llama_prune_1.7B --tokenizer_path ckpt/LLaMA/tokenizer.model
Please modify the ckpt_dir and tokenizer_path arguments to match the location of your LLaMA weights.
Pruning alters the model's structure and degrades its performance, so post-training or fine-tuning on downstream tasks is necessary. We are still developing the code for this part.
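Until that code is released, a generic fine-tuning loop for a pruned causal LM might look like the sketch below. Everything in it is a placeholder (a toy model and random token batches) rather than something provided by this repository; the only assumption is that the pruned network maps token IDs to next-token logits.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in for the pruned LLM: any module mapping [batch, seq] token IDs to
# [batch, seq, vocab] logits will do. Replace it with the actual pruned model.
vocab = 100
toy_model = nn.Sequential(nn.Embedding(vocab, 32), nn.Linear(32, vocab))

# Stand-in for a downstream corpus: random token sequences.
train_loader = [torch.randint(0, vocab, (4, 16)) for _ in range(10)]

optimizer = torch.optim.AdamW(toy_model.parameters(), lr=1e-5)
toy_model.train()

for tokens in train_loader:
    logits = toy_model(tokens)                 # [batch, seq, vocab]
    # Standard causal-LM objective: predict token t+1 from tokens up to t.
    loss = F.cross_entropy(
        logits[:, :-1].reshape(-1, vocab),
        tokens[:, 1:].reshape(-1),
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```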
The LLaMA model is adapted from facebookresearch/llama.
Structural Pruning is powered by VainF/Torch-Pruning.
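For reference, Torch-Pruning's dependency-graph interface looks roughly like the sketch below on a toy model; exact names may differ across Torch-Pruning versions, and applying it to LLaMA means tracing the full transformer rather than this stand-in.

```python
import torch
import torch.nn as nn
import torch_pruning as tp

# Toy stand-in model; with LLaMA the dependency graph is built over the whole network.
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4))

# Build the dependency graph by tracing the model once with example inputs.
DG = tp.DependencyGraph().build_dependency(model, example_inputs=torch.randn(1, 16))

# Collect every tensor coupled with output dims 0 and 2 of the first layer,
# then prune the whole group consistently.
group = DG.get_pruning_group(model[0], tp.prune_linear_out_channels, idxs=[0, 2])
if DG.check_pruning_group(group):
    group.prune()

print(model)  # the first Linear is now 16->6 and the second 6->4
```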
@inproceedings{fang2023depgraph,
  title={DepGraph: Towards Any Structural Pruning},
  author={Fang, Gongfan and Ma, Xinyin and Song, Mingli and Mi, Michael Bi and Wang, Xinchao},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2023}
}
@article{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}