Aurora is a Chinese-language Mixture-of-Experts (MoE) chat model. It is a further work based on Mixtral-8x7B that activates the model's Chinese open-domain chat capability.


Aurora: Activating Chinese chat capability for Mistral-8x7B sparse Mixture-of-Experts through Instruction-Tuning

Note

We apologize for a naming mistake in the paper: Mixtral-8x7B-Instruct-v0.1 was incorrectly referred to as Mistral-8x7B, and "Mixtral" and "Mistral" are not the same thing. We will correct this in the next release.

Rongsheng Wang, Haoming Chen, Ruizhe Zhou, Yaofei Duan, Kunyan Cai, Han Ma, Jiaxi Cui, Jian Li, Patrick Cheong-Iao Pang, Yapeng Wang, Tao Tan☨

☨Corresponding author

Overview

Existing research has demonstrated that refining large language models (LLMs) with machine-generated instruction-following data gives them impressive zero-shot capabilities on novel tasks, without requiring human-authored instructions. In this paper, we systematically investigate, preprocess, and integrate three Chinese instruction-following datasets with the aim of enhancing the Chinese conversational capabilities of the Mixtral-8x7B sparse Mixture-of-Experts model. Through instruction fine-tuning on this carefully processed data, we construct an instruction-tuned Mixtral-8x7B model named "Aurora." To assess Aurora's performance, we use three widely recognized benchmarks: C-Eval, MMLU, and CMMLU. Empirical results validate the effectiveness of instruction fine-tuning applied to the Mixtral-8x7B sparse Mixture-of-Experts model. This work pioneers instruction fine-tuning on a sparse Mixture-of-Experts model, marking a significant step toward enhancing the capabilities of this architecture.

Evaluation

LLM evaluation remains a significant challenge. In addition to the three public benchmarks used in the paper (C-Eval, MMLU, and CMMLU), we report BLEU and ROUGE scores for several training checkpoints:

| Checkpoint | BLEU-4 | ROUGE-1 | ROUGE-2 | ROUGE-L |
|---|---|---|---|---|
| checkpoint-6000 | 18.4134 | 38.2669 | 18.9526 | 26.572 |
| checkpoint-8000 | 18.3351 | 38.4327 | 19.058 | 26.6573 |
| checkpoint-10000 | 18.5638 | 38.5497 | 19.1992 | 26.8305 |
| checkpoint-12000 | 18.7156 | 38.7787 | 19.3347 | 27.0613 |
| checkpoint-14000 | 18.5194 | 38.6898 | 19.2032 | 26.8863 |
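For context, scores of this kind are computed by comparing model responses against reference answers on a held-out Chinese set. The snippet below is a minimal sketch of how BLEU-4 and ROUGE are typically computed for Chinese text; it assumes the jieba, nltk, and rouge_chinese packages and is an illustration of the metrics, not the exact evaluation script used here.

# Minimal sketch: BLEU-4 and ROUGE for one Chinese prediction/reference pair.
# Assumes jieba, nltk, and rouge_chinese are installed; illustrative only.
import jieba
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_chinese import Rouge

prediction = "Aurora 是基于 Mixtral-8x7B 指令微调得到的中文对话模型。"
reference = "Aurora 是在 Mixtral-8x7B 上通过指令微调构建的中文聊天模型。"

pred_tokens = list(jieba.cut(prediction))
ref_tokens = list(jieba.cut(reference))

# BLEU-4 with smoothing, scaled to 0-100 as in the table above.
bleu4 = sentence_bleu([ref_tokens], pred_tokens,
                      smoothing_function=SmoothingFunction().method3) * 100

# ROUGE F1 scores; rouge_chinese expects space-joined token strings.
scores = Rouge().get_scores(" ".join(pred_tokens), " ".join(ref_tokens))[0]
print(f"BLEU-4:  {bleu4:.2f}")
print(f"ROUGE-1: {scores['rouge-1']['f'] * 100:.2f}")
print(f"ROUGE-2: {scores['rouge-2']['f'] * 100:.2f}")
print(f"ROUGE-L: {scores['rouge-l']['f'] * 100:.2f}")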

For reference, the approximate GPU memory usage during training and inference is shown below. Please note that all training and inference was done on a single GPU.

| Stage | GPU Memory Usage |
|---|---|
| Training | ~43 GiB |
| Inference | ~25 GiB |
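If you want to check the memory footprint on your own hardware, a simple way (assuming PyTorch with CUDA) is to read the peak-allocation counter around your run:

# Measure peak GPU memory for a training or inference run (PyTorch/CUDA assumed).
import torch

torch.cuda.reset_peak_memory_stats()
# ... run your inference call or training step here ...
peak_gib = torch.cuda.max_memory_allocated() / 1024 ** 3
print(f"Peak GPU memory allocated: {peak_gib:.1f} GiB")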

Easy-to-Use

1. Clone and Set up

git clone https://github.com/WangRongsheng/Aurora.git
cd Aurora
pip install -r requirements.txt

2. Download Model

Base Model:

Model Download
Mixtral-8x7B-Instruct-v0.1 [HuggingFace] [HuggingFace-mirror] [ModelScope]

LoRA Model:

Model Download
Aurora [HuggingFace] [ModelScope]

Because the full model weights are large and inconvenient to manage, we provide LoRA weights. They are merged with the base model automatically before inference, so you don't have to do anything extra.
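For illustration only, the sketch below shows what attaching the Aurora LoRA adapter to the base model looks like with the transformers and peft libraries; the local paths are assumptions that mirror the commands in the next step, and the demo scripts below handle all of this for you.

# Illustrative only: how a LoRA adapter is attached to the base model.
# The demo scripts below do this automatically; paths are assumptions that
# should point at your downloaded base model and Aurora LoRA weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "./Mixtral-8x7B-Instruct-v0.1",
    torch_dtype=torch.float16,   # the demos additionally quantize to 4-bit
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("./Mixtral-8x7B-Instruct-v0.1")
model = PeftModel.from_pretrained(base, "./Aurora")  # apply the LoRA weights
model.eval()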

3. Inference

Web:

CUDA_VISIBLE_DEVICES=0 python src/web_demo.py \
    --model_name_or_path ./Mixtral-8x7B-Instruct-v0.1 \
    --checkpoint_dir Aurora \
    --finetuning_type lora \
    --quantization_bit 4 \
    --template mistral

Then you can visit: http://127.0.0.1:7860/

CLI:

CUDA_VISIBLE_DEVICES=0 python src/cli_demo.py \
    --model_name_or_path ./Mixtral-8x7B-Instruct-v0.1 \
    --checkpoint_dir Aurora \
    --finetuning_type lora \
    --quantization_bit 4 \
    --template mistral

API:

CUDA_VISIBLE_DEVICES=0 python src/api_demo.py \
    --model_name_or_path ./Mixtral-8x7B-Instruct-v0.1 \
    --checkpoint_dir Aurora \
    --finetuning_type lora \
    --quantization_bit 4 \
    --template mistral

If you need to load the weights of a specific checkpoint, point --checkpoint_dir at it, for example: --checkpoint_dir Aurora/checkpoint-5000.
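Once api_demo.py is running, you can query it from Python. The sketch below assumes an OpenAI-style chat-completions endpoint on port 8000 (the LLaMA-Factory default this repository builds on); check the script's startup log for the actual host, port, and route.

# Minimal API client sketch. The endpoint URL is an assumption based on the
# LLaMA-Factory default (OpenAI-style API on port 8000); verify it against
# the output printed when api_demo.py starts.
import requests

payload = {
    "model": "Aurora",
    "messages": [{"role": "user", "content": "你好，请用中文介绍一下你自己。"}],
    "temperature": 0.7,
}
resp = requests.post("http://127.0.0.1:8000/v1/chat/completions", json=payload)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])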

Train

If you have a single GPU with more than 48 GB of memory, you can train your own model.

Train your MoE model
CUDA_VISIBLE_DEVICES=5 python src/train_bash.py \
    --stage sft \
    --model_name_or_path ./Mixtral-8x7B-Instruct-v0.1 \
    --do_train \
    --dataset alpaca_zh,alpaca_gpt4_zh,sharegpt \
    --finetuning_type lora \
    --quantization_bit 4 \
    --overwrite_cache \
    --output_dir output/ \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --logging_steps 100 \
    --save_steps 1000 \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --plot_loss \
    --fp16 \
    --template mistral \
    --lora_target q_proj,v_proj

--quantization_bit 4 means you will use QLoRA. If you have more GPU memory, you can remove this flag and use standard LoRA instead.
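For reference, the sketch below shows roughly what this QLoRA setup corresponds to in code with the transformers and peft APIs. The 4-bit settings mirror --quantization_bit 4 and the target modules mirror --lora_target q_proj,v_proj; the LoRA rank, alpha, and dropout values are illustrative assumptions, not the exact values used by the training script.

# Rough code equivalent of --quantization_bit 4 with --finetuning_type lora:
# load the base model in 4-bit (QLoRA) and attach trainable LoRA adapters.
# Rank/alpha/dropout are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "./Mixtral-8x7B-Instruct-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # matches --lora_target q_proj,v_proj
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()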

Evaluate your MoE model
CUDA_VISIBLE_DEVICES=0 python src/evaluate.py \
    --model_name_or_path ./Mixtral-8x7B-Instruct-v0.1 \
    --checkpoint_dir Aurora/checkpoint-5000 \
    --finetuning_type lora \
    --quantization_bit 4 \
    --template mistral \
    --task cmmlu \
    --split test \
    --lang en \
    --n_shot 5 \
    --batch_size 8

--task can be cmmlu, mmlu, or ceval; --lang can be zh or en.

Results

Acknowledgments

This work was mainly done by the Faculty of Applied Sciences of the Macao Polytechnic University. The computational resources used in this work were obtained from AWS servers. The fine-tuning framework we used is LLaMA-Factory, which brought a lot of convenience to our work. We also thank the open-source community for public datasets such as shareAI, stanford_alpaca, and GPT-4-LLM. Most importantly, we are very grateful to Mistral AI, who are leading a new technology boom that will dramatically change the future of technology development.

Citation

If you find our work helpful, feel free to cite it.

@misc{wang2023auroraactivating,
      title={Aurora: Activating Chinese chat capability for Mistral-8x7B sparse Mixture-of-Experts through Instruction-Tuning}, 
      author={Rongsheng Wang and Haoming Chen and Ruizhe Zhou and Yaofei Duan and Kunyan Cai and Han Ma and Jiaxi Cui and Jian Li and Patrick Cheong-Iao Pang and Yapeng Wang and Tao Tan},
      year={2023},
      eprint={2312.14557},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

License

Please follow the Apache 2.0 License.
