Fix llama2 scripts. (intelligent-machine-learning#953)
* Fix llama2 scripts.

* Retrigger the CI/CD process.
youxingling committed Jan 22, 2024
1 parent d2a8b3e commit ab27796
Showing 4 changed files with 11 additions and 6 deletions.
10 changes: 5 additions & 5 deletions atorch/examples/llama2/README.md
@@ -23,13 +23,13 @@ cd dlrover/atorch/examples/llama2
pip install -r requirements.txt

# Configurable environment variable: DATASET_PATH, MODEL_NAME_OR_PATH, PER_DEVICE_TRAIN_BATCH_SIZE, etc.
-sh fsdp_llama2_entry.sh
+bash fsdp_llama2_entry.sh

# use fp8
USE_FP8=1 sh fsdp_llama2_entry.sh

# use lora
-USE_LORA=1 sh fsdp_llama2_entry.sh
+USE_LORA=1 bash fsdp_llama2_entry.sh
```

Note that transformer_engine is required for fp8. You can use the docker image <code>registry.cn-hangzhou.aliyuncs.com/atorch/atorch:pt210_te</code>, which has transformer_engine pre-installed.
@@ -387,8 +387,8 @@ pip install -r requirements.txt
# Configurable environment variable:
# DATASET_PATH, MODEL_NAME_OR_PATH, PIPELINE_PARALLEL_SIZE, MODEL_PARALLEL_SIZE, etc.
# e.g. in an 8-gpu system, to run mp-2 dp-2 pp-2,
-# use `PIPELINE_PARALLEL_SIZE=2 MODEL_PARALLEL_SIZE=2 sh ds_3d_llama2_entry.sh`
-sh ds_3d_llama2_entry.sh
+# use `PIPELINE_PARALLEL_SIZE=2 MODEL_PARALLEL_SIZE=2 bash ds_3d_llama2_entry.sh`
+bash ds_3d_llama2_entry.sh
```

### Performance
@@ -450,7 +450,7 @@ cd dlrover/atorch/examples/llama2

# Configurable environment variable: DATASET_PATH, MODEL_NAME_OR_PATH, PER_DEVICE_TRAIN_BATCH_SIZE, etc.
# Change BO_SG_MAX_IETR (the maximum number of BO search rounds) and RANDOM_SAMPLE (the number of initial BO sampling steps) if needed.
-sh bayes_opt_sg_llama2_entry.sh
+bash bayes_opt_sg_llama2_entry.sh

```

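The `USE_FP8=1 sh …` and `USE_LORA=1 bash …` invocations in the README rely on one-shot environment assignments, which set a variable for a single command only. A minimal sketch of how an entry script could read such flags (the function name, defaults, and messages are illustrative assumptions, not the actual script's code):

```shell
#!/bin/bash
# Hypothetical flag reader mirroring the `USE_FP8=1 bash script.sh`
# invocation style: the prefix assignment sets the variable only for
# that one command, and the script falls back to a default otherwise.
report_flags() {
  local fp8=${USE_FP8:-0} lora=${USE_LORA:-0}
  [ "$fp8" = "1" ] && echo "fp8 enabled"
  [ "$lora" = "1" ] && echo "lora enabled"
  return 0
}

USE_FP8=1 report_flags   # prints "fp8 enabled" for this call only
```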
4 changes: 3 additions & 1 deletion atorch/examples/llama2/dataset_model.sh
@@ -1,3 +1,5 @@
+#!/bin/bash
+
HOME=$(echo ~)

# Dataset path, would download in `example_utils.py` if not exist
@@ -17,7 +19,7 @@ if ! [[ -d $MODEL_NAME_OR_PATH && \
git clone https://github.com/shawwn/llama-dl.git
pushd llama-dl
sed 's/MODEL_SIZE="7B,13B,30B,65B"/MODEL_SIZE="'$MODEL_SIZE'"/g' llama.sh > llama$MODEL_SIZE.sh
-sh llama$MODEL_SIZE.sh
+bash llama$MODEL_SIZE.sh
pip install transformers sentencepiece
python -m transformers.models.llama.convert_llama_weights_to_hf --input_dir=. --model_size=$MODEL_SIZE --output_dir=$MODEL_NAME_OR_PATH
popd
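The `sed` line in dataset_model.sh rewrites the `MODEL_SIZE` list inside the downloaded llama.sh before running it. The same templating pattern in isolation (the temp directory and one-line template stand in for the real llama.sh):

```shell
#!/bin/bash
# Rewrite a variable assignment inside a script template with sed,
# as dataset_model.sh does for llama.sh; the template content and
# temp-directory paths here are illustrative.
MODEL_SIZE="7B"
tmpdir=$(mktemp -d)
echo 'MODEL_SIZE="7B,13B,30B,65B"' > "$tmpdir/llama.sh"
sed 's/MODEL_SIZE="7B,13B,30B,65B"/MODEL_SIZE="'$MODEL_SIZE'"/g' \
  "$tmpdir/llama.sh" > "$tmpdir/llama$MODEL_SIZE.sh"
cat "$tmpdir/llama7B.sh"   # now reads: MODEL_SIZE="7B"
```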
1 change: 1 addition & 0 deletions atorch/examples/llama2/ds_3d_llama2_entry.sh
@@ -1,3 +1,4 @@
+#!/bin/bash
set -x

source ./dataset_model.sh
2 changes: 2 additions & 0 deletions atorch/examples/llama2/fsdp_llama2_entry.sh
@@ -1,3 +1,5 @@
+#!/bin/bash
+
set -x

source ./dataset_model.sh
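A likely reason these scripts need `bash` rather than `sh`: dataset_model.sh uses the bash-only `[[ ]]` extended test (visible in its hunk header above), which is a syntax error under a plain POSIX shell such as dash, the `/bin/sh` on Debian-based systems. A minimal sketch of the construct (the helper name is hypothetical):

```shell
#!/bin/bash
# `[[ -d ... ]]` is a bash extension; a script that uses it fails
# when run via `sh` on systems where /bin/sh is a POSIX shell
# like dash, which is presumably why this commit switches to bash.
check_dir() {
  if [[ -d "$1" ]]; then
    echo "present"
  else
    echo "missing"
  fi
}

check_dir /tmp   # prints "present" on systems with a /tmp directory
```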
