Refactor dataset (#802)
Jintao-Huang committed May 6, 2024
1 parent b6c4f1b commit 9dc41ab
Showing 75 changed files with 1,582 additions and 1,485 deletions.
28 changes: 14 additions & 14 deletions README.md
@@ -506,20 +506,20 @@ The complete list of supported models and datasets can be found at [Supported Mo

| Dataset Type | Training Task | Documentation |
|--------------|:---------------|--------------------------------------------------------------- |
| General | Fine-tuning | 🔥ruozhiba, 🔥ms-bench, 🔥ms-bench-mini, 🔥alpaca-en(gpt4), 🔥alpaca-zh(gpt4), multi-alpaca-all, instinwild-en, instinwild-zh, cot-en, cot-zh, firefly-all-zh, instruct-en, gpt4all-en, sharegpt-en, sharegpt-zh, tulu-v2-sft-mixture, wikipedia-zh, open-orca, open-orca-gpt4, sharegpt-gpt4, 🔥sharegpt-gpt4-mini. |
| Agent | Fine-tuning | 🔥ms-agent, ms-agent-for-agentfabric-default, ms-agent-for-agentfabric-addition, damo-mini-agent-zh, damo-agent-zh, agent-instruct-all-en. |
| General | Human Alignment | 🔥hh-rlhf-cn, stack-exchange-paired, hh-rlhf-harmless-base, hh-rlhf-helpful-base, hh-rlhf-helpful-online, hh-rlhf-helpful-rejection-sampled, hh-rlhf-red-team-attempts, hh-rlhf-cn-harmless-base-cn, hh-rlhf-cn-helpful-base-cn, hh-rlhf-cn-harmless-base-en, hh-rlhf-cn-helpful-base-en. |
| Code | Fine-tuning | code-alpaca-en, 🔥leetcode-python-en, 🔥codefuse-python-en, 🔥codefuse-evol-instruction-zh. |
| Medical | Fine-tuning | medical-en, medical-zh, medical-mini-zh, 🔥disc-med-sft-zh. |
| Legal | Fine-tuning | lawyer-llama-zh, tigerbot-law-zh, 🔥disc-law-sft-zh. |
| Math | Fine-tuning | 🔥blossom-math-zh, school-math-zh, open-platypus-en. |
| SQL | Fine-tuning | text2sql-en, 🔥sql-create-context-en. |
| Text Generation | Fine-tuning | 🔥advertise-gen-zh, 🔥dureader-robust-zh. |
| Classification | Fine-tuning | cmnli-zh, 🔥cmnli-mini-zh, 🔥jd-sentiment-zh, 🔥hc3-zh, 🔥hc3-en. |
| Quantization Assist | Quantization | pileval. |
| Other | Fine-tuning | finance-en, poetry-zh, webnovel-zh, generated-chat-zh, cls-fudan-news-zh, ner-jave-zh. |
| Vision | Fine-tuning | coco-en, 🔥coco-mini-en, coco-mini-en-2, capcha-images. |
| Audio | Fine-tuning | aishell1-zh, 🔥aishell1-mini-zh. |
| General | Fine-tuning | 🔥ruozhiba, 🔥ms-bench, 🔥alpaca-en(gpt4), 🔥alpaca-zh(gpt4), multi-alpaca, instinwild, cot-en, cot-zh, firefly-zh, instruct-en, gpt4all-en, sharegpt, tulu-v2-sft-mixture, wikipedia-zh, open-orca, sharegpt-gpt4, deepctrl-sft, coig-cqia. |
| Agent | Fine-tuning | 🔥ms-agent, 🔥ms-agent-for-agentfabric, ms-agent-multirole, 🔥toolbench-for-alpha-umi, damo-agent-zh, damo-agent-zh-mini, agent-instruct-all-en. |
| General | Human Alignment | hh-rlhf, 🔥hh-rlhf-cn, stack-exchange-paired. |
| Code | Fine-tuning | code-alpaca-en, 🔥leetcode-python-en, 🔥codefuse-python-en, 🔥codefuse-evol-instruction-zh. |
| Medical | Fine-tuning | medical-en, medical-zh, 🔥disc-med-sft-zh. |
| Legal | Fine-tuning | lawyer-llama-zh, tigerbot-law-zh, 🔥disc-law-sft-zh. |
| Math | Fine-tuning | 🔥blossom-math-zh, school-math-zh, open-platypus-en. |
| SQL | Fine-tuning | text2sql-en, 🔥sql-create-context-en. |
| Text Generation | Fine-tuning | 🔥advertise-gen-zh, 🔥dureader-robust-zh. |
| Classification | Fine-tuning | cmnli-zh, 🔥jd-sentiment-zh, 🔥hc3-zh, 🔥hc3-en. |
| Quantization Assist | Quantization | pileval. |
| Other | Fine-tuning | finance-en, poetry-zh, webnovel-zh, generated-chat-zh, cls-fudan-news-zh, ner-jave-zh. |
| Vision | Fine-tuning | coco-en, 🔥coco-en-mini, coco-en-2, coco-en-2-mini, capcha-images. |
| Audio | Fine-tuning | aishell1-zh, 🔥aishell1-zh-mini. |

### Supported Technologies

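The dataset identifiers in the table above are what `--dataset` accepts on the command line. As a minimal sketch (not part of this commit), combining two entries from the table with the `#N` sampling suffix that this refactor uses in the Qwen1.5 examples further below:

```
# Minimal sketch: fine-tune on two datasets from the table above.
# The `#500` suffix subsamples each dataset, following the syntax shown later
# in this commit; treat its exact behavior as an assumption for other versions.
CUDA_VISIBLE_DEVICES=0 \
swift sft \
    --model_type qwen1half-7b-chat \
    --dataset ms-bench#500 leetcode-python-en#500 \
    --lora_target_modules ALL \
    --output_dir output
```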
14 changes: 7 additions & 7 deletions README_CN.md
@@ -505,20 +505,20 @@ CUDA_VISIBLE_DEVICES=0 swift deploy \

| Dataset Type | Training Task | Documentation |
| ---------- | :------- | ------------------------------------------------------------ |
| General | Fine-tuning | 🔥ruozhiba, 🔥ms-bench, 🔥ms-bench-mini, 🔥alpaca-en(gpt4), 🔥alpaca-zh(gpt4), multi-alpaca-all, instinwild-en, instinwild-zh, cot-en, cot-zh, firefly-all-zh, instruct-en, gpt4all-en, sharegpt-en, sharegpt-zh, tulu-v2-sft-mixture, wikipedia-zh, open-orca, open-orca-gpt4, sharegpt-gpt4, 🔥sharegpt-gpt4-mini. |
| Agent | Fine-tuning | 🔥ms-agent, ms-agent-for-agentfabric-default, ms-agent-for-agentfabric-addition, damo-mini-agent-zh, damo-agent-zh, agent-instruct-all-en. |
| General | Human Alignment | 🔥hh-rlhf-cn, stack-exchange-paired, hh-rlhf-harmless-base, hh-rlhf-helpful-base, hh-rlhf-helpful-online, hh-rlhf-helpful-rejection-sampled, hh-rlhf-red-team-attempts, hh-rlhf-cn-harmless-base-cn, hh-rlhf-cn-helpful-base-cn, hh-rlhf-cn-harmless-base-en, hh-rlhf-cn-helpful-base-en. |
| General | Fine-tuning | 🔥ruozhiba, 🔥ms-bench, 🔥alpaca-en(gpt4), 🔥alpaca-zh(gpt4), multi-alpaca, instinwild, cot-en, cot-zh, firefly-zh, instruct-en, gpt4all-en, sharegpt, tulu-v2-sft-mixture, wikipedia-zh, open-orca, sharegpt-gpt4, deepctrl-sft, coig-cqia. |
| Agent | Fine-tuning | 🔥ms-agent, 🔥ms-agent-for-agentfabric, ms-agent-multirole, 🔥toolbench-for-alpha-umi, damo-agent-zh, damo-agent-zh-mini, agent-instruct-all-en. |
| General | Human Alignment | hh-rlhf, 🔥hh-rlhf-cn, stack-exchange-paired. |
| Code | Fine-tuning | code-alpaca-en, 🔥leetcode-python-en, 🔥codefuse-python-en, 🔥codefuse-evol-instruction-zh. |
| Medical | Fine-tuning | medical-en, medical-zh, medical-mini-zh, 🔥disc-med-sft-zh. |
| Medical | Fine-tuning | medical-en, medical-zh, 🔥disc-med-sft-zh. |
| Legal | Fine-tuning | lawyer-llama-zh, tigerbot-law-zh, 🔥disc-law-sft-zh. |
| Math | Fine-tuning | 🔥blossom-math-zh, school-math-zh, open-platypus-en. |
| SQL | Fine-tuning | text2sql-en, 🔥sql-create-context-en. |
| Text Generation | Fine-tuning | 🔥advertise-gen-zh, 🔥dureader-robust-zh. |
| Classification | Fine-tuning | cmnli-zh, 🔥cmnli-mini-zh, 🔥jd-sentiment-zh, 🔥hc3-zh, 🔥hc3-en. |
| Classification | Fine-tuning | cmnli-zh, 🔥jd-sentiment-zh, 🔥hc3-zh, 🔥hc3-en. |
| Quantization Assist | Quantization | pileval. |
| Other | Fine-tuning | finance-en, poetry-zh, webnovel-zh, generated-chat-zh, cls-fudan-news-zh, ner-jave-zh. |
| Vision | Fine-tuning | coco-en, 🔥coco-mini-en, coco-mini-en-2, capcha-images. |
| Audio | Fine-tuning | aishell1-zh, 🔥aishell1-mini-zh. |
| Vision | Fine-tuning | coco-en, 🔥coco-en-mini, coco-en-2, coco-en-2-mini, capcha-images. |
| Audio | Fine-tuning | aishell1-zh, 🔥aishell1-zh-mini. |

### Supported Technologies

2 changes: 1 addition & 1 deletion docs/source/LLM/Benchmark.md
@@ -720,7 +720,7 @@ swift sft \
|full|qwen-7b-chat|ms-agent|2.0|full||7721.3245(100.0000%)|True|True|lr=5e-05/epoch=2|73.53GiB|1.43(87543 samples/61022.97 seconds)|29.51(3382 tokens/114.62 seconds)|0.54|0.95|0.343|0.536|0.495|
|llamapro|qwen-7b-chat|ms-agent|2.0|llamapro|num_blocks=4|809.5826(9.4900%)|True|True|lr=5e-05/epoch=2|38.11GiB|1.53(87543 samples/57294.42 seconds)|25.80(2374 tokens/92.02 seconds)|0.53|1.00|0.434|0.645|0.357|
|lora+|qwen-7b-chat|ms-agent|2.0|lora|rank=8/target=ALL/alpha=32/lr_ratio=16.0/use_rslora=False/use_dora=False|17.8913(0.2312%)|True|True|lr=5e-05/epoch=2|32.35GiB|0.95(87543 samples/91923.80 seconds)|18.81(3329 tokens/176.94 seconds)|0.53|0.98|0.432|0.647|0.344|
|lora+neftune|qwen-7b-chat|ms-agent|2.0|lora|rank=8/target=ALL/alpha=32/lr_ratio=None/use_rslora=False/use_dora=Falseneftune_alpha=15.0|17.8913(0.2312%)|True|True|lr=5e-05/epoch=2|32.35GiB|0.96(87543 samples/91525.50 seconds)|19.84(161792 tokens/8156.02 seconds)|0.53|1.02|0.456|0.671|0.401|
|lora+neftune|qwen-7b-chat|ms-agent|2.0|lora|rank=8/target=ALL/alpha=32/lr_ratio=None/use_rslora=False/use_dora=False/neftune_noise_alpha=15.0|17.8913(0.2312%)|True|True|lr=5e-05/epoch=2|32.35GiB|0.96(87543 samples/91525.50 seconds)|19.84(161792 tokens/8156.02 seconds)|0.53|1.02|0.456|0.671|0.401|
|lora+no_mix|qwen-7b-chat|ms-agent|0.0|lora|rank=8/target=ALL/alpha=32/lr_ratio=None/use_rslora=False/use_dora=False|17.8913(0.2312%)|True|True|lr=5e-05/epoch=2|30.86GiB|0.91(29698 samples/32570.15 seconds)|19.89(36308 tokens/1825.26 seconds)|0.53|0.53|0.470|0.666|0.574|
|lora|qwen-7b-chat|ms-agent|2.0|lora|rank=8/target=ALL/alpha=32/lr_ratio=None/use_rslora=False/use_dora=False|17.8913(0.2312%)|True|True|lr=5e-05/epoch=2|32.35GiB|0.95(87543 samples/91974.29 seconds)|18.11(2415 tokens/133.32 seconds)|0.53|1.01|0.462|0.676|0.304|
|qwen-7b-chat-eval|qwen-7b-chat|None|0.0|None||None(None)||||None||30.81(13765 tokens/446.83 seconds)|||**0.517**|0.679|0.568|
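For orientation, the `lora+neftune` row above corresponds roughly to the command sketched below. The flag names `--lora_rank`, `--lora_alpha`, and `--neftune_noise_alpha` are inferred from the config column and are assumptions, not taken from this commit; check `swift sft --help` for the exact spelling in your version.

```
# Hedged sketch of the lora+neftune benchmark configuration:
# rank=8, target=ALL, alpha=32, neftune_noise_alpha=15.0, lr=5e-5, 2 epochs on ms-agent.
# Flag names are inferred from the table, not confirmed by this commit.
CUDA_VISIBLE_DEVICES=0 \
swift sft \
    --model_type qwen-7b-chat \
    --dataset ms-agent \
    --sft_type lora \
    --lora_rank 8 \
    --lora_target_modules ALL \
    --lora_alpha 32 \
    --neftune_noise_alpha 15.0 \
    --learning_rate 5e-5 \
    --num_train_epochs 2
```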
13 changes: 4 additions & 9 deletions docs/source/LLM/LLM人类对齐训练文档.md
@@ -32,23 +32,18 @@ cd examples/pytorch/llm
# Memory usage: 4 * 20G (device_map over 2 GPUs * 2 DDP processes)
nproc_per_node=2

PYTHONPATH=../../.. \
CUDA_VISIBLE_DEVICES=0,1,2,3 \
torchrun \
--nproc_per_node=$nproc_per_node \
--master_port 29500 \
llm_dpo.py \
NPROC_PER_NODE=$nproc_per_node \
MASTER_PORT=29500 \
swift dpo \
--model_type yi-6b-chat \
--ref_model_type yi-6b-chat \
--model_revision master \
--sft_type lora \
--tuner_backend swift \
--dtype AUTO \
--output_dir output \
--dataset hh-rlhf-cn-harmless-base-cn \
--train_dataset_sample -1 \
--truncation_strategy truncation_left \
--val_dataset_sample 2000 \
--dataset hh-rlhf-cn:harmless_base_cn \
--num_train_epochs 3 \
--max_length 1024 \
--max_prompt_length 512 \
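Putting the fragments above together, the new-style invocation looks roughly like the sketch below; flags hidden by the truncated diff are omitted, and the `dataset:subset` form (`hh-rlhf-cn:harmless_base_cn`) is the subset syntax this refactor introduces.

```
# Sketch assembled from the visible lines of the new-style DPO example.
# Flags truncated out of the diff above are intentionally omitted.
nproc_per_node=2

CUDA_VISIBLE_DEVICES=0,1,2,3 \
NPROC_PER_NODE=$nproc_per_node \
MASTER_PORT=29500 \
swift dpo \
    --model_type yi-6b-chat \
    --ref_model_type yi-6b-chat \
    --sft_type lora \
    --dtype AUTO \
    --output_dir output \
    --dataset hh-rlhf-cn:harmless_base_cn \
    --num_train_epochs 3 \
    --max_length 1024 \
    --max_prompt_length 512
```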
17 changes: 5 additions & 12 deletions docs/source/LLM/Qwen1.5全流程最佳实践.md
@@ -187,15 +187,14 @@ from swift.llm import DatasetName, ModelType, SftArguments, sft_main

sft_args = SftArguments(
model_type=ModelType.qwen1half_7b_chat,
dataset=[DatasetName.alpaca_zh, DatasetName.alpaca_en],
train_dataset_sample=1000,
dataset=[f'{DatasetName.alpaca_zh}#500', f'{DatasetName.alpaca_en}#500',
f'{DatasetName.self_cognition}#500'],
logging_steps=5,
max_length=2048,
learning_rate=5e-5,
warmup_ratio=0.4,
output_dir='output',
lora_target_modules=['ALL'],
self_cognition_sample=500,
model_name=['小黄', 'Xiao Huang'],
model_author=['魔搭', 'ModelScope'])
output = sft_main(sft_args)
@@ -212,15 +211,13 @@ print(f'best_model_checkpoint: {best_model_checkpoint}')
CUDA_VISIBLE_DEVICES=0,1 \
swift sft \
--model_type qwen1half-7b-chat \
--dataset alpaca-zh alpaca-en \
--train_dataset_sample 1000 \
--dataset alpaca-zh#500 alpaca-en#500 self-cognition#500 \
--logging_steps 5 \
--max_length 2048 \
--learning_rate 5e-5 \
--warmup_ratio 0.4 \
--output_dir output \
--lora_target_modules ALL \
--self_cognition_sample 500 \
--model_name 小黄 'Xiao Huang' \
--model_author 魔搭 ModelScope \
```
@@ -233,15 +230,13 @@ CUDA_VISIBLE_DEVICES=0,1,2,3 \
NPROC_PER_NODE=4 \
swift sft \
--model_type qwen1half-7b-chat \
--dataset alpaca-zh alpaca-en \
--train_dataset_sample 1000 \
--dataset alpaca-zh#500 alpaca-en#500 self-cognition#500 \
--logging_steps 5 \
--max_length 2048 \
--learning_rate 5e-5 \
--warmup_ratio 0.4 \
--output_dir output \
--lora_target_modules ALL \
--self_cognition_sample 500 \
--model_name 小黄 'Xiao Huang' \
--model_author 魔搭 ModelScope \
--deepspeed default-zero2 \
@@ -484,15 +479,13 @@ CUDA_VISIBLE_DEVICES=0,1,2,3 \
NPROC_PER_NODE=4 \
swift sft \
--model_type qwen1half-72b-chat \
--dataset alpaca-zh alpaca-en \
--train_dataset_sample 1000 \
--dataset alpaca-zh#500 alpaca-en#500 self-cognition#500 \
--logging_steps 5 \
--max_length 4096 \
--learning_rate 5e-5 \
--warmup_ratio 0.4 \
--output_dir output \
--lora_target_modules ALL \
--self_cognition_sample 500 \
--model_name 小黄 'Xiao Huang' \
--model_author 魔搭 ModelScope \
--deepspeed default-zero3 \
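The recurring change in the Qwen1.5 examples is the move from separate sampling arguments to inline suffixes on `--dataset`. A before/after sketch of that migration, using only flags that appear in the diffs above (the two forms are how this commit migrates the same example, not a guarantee of identical sampling behavior):

```
# Old style (removed in this commit): sampling controlled by separate arguments.
swift sft \
    --model_type qwen1half-7b-chat \
    --dataset alpaca-zh alpaca-en \
    --train_dataset_sample 1000 \
    --self_cognition_sample 500 \
    --model_name 小黄 'Xiao Huang' \
    --model_author 魔搭 ModelScope

# New style (added in this commit): per-dataset `#N` sampling suffixes,
# with self-cognition passed as an ordinary dataset entry.
swift sft \
    --model_type qwen1half-7b-chat \
    --dataset alpaca-zh#500 alpaca-en#500 self-cognition#500 \
    --model_name 小黄 'Xiao Huang' \
    --model_author 魔搭 ModelScope
```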