added tiny llama examples for lora and qlora (axolotl-ai-cloud#1027)
* added tiny llama examples for lora and qlora

* corrected yml files and removed tiny-llama.yml from llama-2 example
tdolan21 committed Jan 3, 2024
1 parent 4d2e842 commit c75f916
Showing 3 changed files with 87 additions and 6 deletions.
17 changes: 17 additions & 0 deletions examples/tiny-llama/README.md
@@ -0,0 +1,17 @@
# Overview

This is a simple example of how to finetune TinyLlama-1.1B using either LoRA or QLoRA:

LoRA:

```
accelerate launch -m axolotl.cli.train examples/tiny-llama/lora.yml
```

QLoRA:

```
accelerate launch -m axolotl.cli.train examples/tiny-llama/qlora.yml
```

Both runs take about 10 minutes to complete on an RTX 4090.
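After training, the resulting adapter can be smoke-tested with axolotl's inference entry point. A minimal sketch, assuming the default output_dir values from these configs (./lora-out and ./qlora-out):

```
accelerate launch -m axolotl.cli.inference examples/tiny-llama/lora.yml \
    --lora_model_dir="./lora-out"
```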
9 changes: 3 additions & 6 deletions examples/tiny-llama/lora.yml
@@ -1,5 +1,4 @@
-base_model: PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T
+base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
is_llama_derived_model: true
@@ -17,6 +16,7 @@ output_dir: ./lora-out

sequence_len: 4096
sample_packing: true
+pad_to_sequence_len: true

adapter: lora
lora_model_dir:
@@ -55,14 +55,11 @@ flash_attention: true

warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
-  bos_token: "<s>"
-  eos_token: "</s>"
-  unk_token: "<unk>"
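
Once a run finishes, the adapter can be merged back into the base weights for standalone use. A sketch, assuming axolotl's merge_lora entry point and the ./lora-out directory this config writes to:

```
python3 -m axolotl.cli.merge_lora examples/tiny-llama/lora.yml \
    --lora_model_dir="./lora-out"
```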

67 changes: 67 additions & 0 deletions examples/tiny-llama/qlora.yml
@@ -0,0 +1,67 @@
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
is_llama_derived_model: true

load_in_8bit: false
load_in_4bit: true
strict: false

datasets:
  - path: mhenrichsen/alpaca_2k_test
    type: alpaca
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./qlora-out

adapter: qlora
lora_model_dir:

sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true

lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 4
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch: 4
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
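
With gradient_accumulation_steps: 4 and micro_batch_size: 2, the effective per-GPU batch size is 8. The dataset config above can also be tokenized and validated before launching a full run. A sketch, assuming axolotl's preprocess entry point:

```
python3 -m axolotl.cli.preprocess examples/tiny-llama/qlora.yml
```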
