
Alpaca-Lora

Instruct-tuning LLaMA on consumer hardware

Installation

pip install -r requirements.txt

Finetuning

To fine-tune a model with this repo, run:

python finetune.py \
    --data_path="path/to/your/data" \
    --micro_batch_size=8 \
    --batch_size=128 \
    --lr=3e-4 \
    --epochs=3 \
    --output_dir="lora-alpaca" \
    --model_pretrained_name="decapoda-research/llama-30b-hf"
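
Note: in the upstream Alpaca-LoRA training script, batch_size is the effective batch size while micro_batch_size is what actually fits on the GPU per step; the ratio between them (here 128 / 8 = 16) becomes the number of gradient-accumulation steps. Assuming finetune.py keeps that convention, you can lower micro_batch_size to fit smaller GPUs without changing the effective batch size.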

Your data must be structured like the Alpaca dataset and stored in JSONL format. An example can be found here.
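
For reference, here is a minimal sketch of an Alpaca-style record and a quick sanity check you can run on a JSONL file before training. The instruction/input/output field names follow the Alpaca dataset; the data/alpaca.jsonl path is only a placeholder:

import json

# Each line of the file is one JSON object with the Alpaca fields:
# "instruction" (the task), "input" (optional context, may be ""),
# and "output" (the expected response).
example = {
    "instruction": "Name three primary colors.",
    "input": "",
    "output": "Red, blue, and yellow.",
}
print(json.dumps(example))

# Sanity-check a JSONL file before training ("data/alpaca.jsonl" is a placeholder path).
with open("data/alpaca.jsonl") as f:
    for i, line in enumerate(f):
        record = json.loads(line)
        missing = {"instruction", "input", "output"} - record.keys()
        assert not missing, f"line {i}: missing fields {missing}"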

Inference

To run Alpaca-30B interactively, run:

python generate.py 

To run another version of Alpaca (for example, the 7B model), pass the adapter and base-model names explicitly:

python generate.py \
    --path_to_lora_adapters="tloen/alpaca-lora-7b" \
    --pretrained_model="decapoda-research/llama-7b-hf"
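
Under the hood, generate.py presumably loads the base LLaMA weights and then attaches the LoRA adapter with the peft library. A minimal sketch of that loading pattern, assuming the standard transformers + peft APIs (the prompt template and generation settings below are illustrative, not taken from the script):

import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

# Load the frozen base model in fp16, then layer the LoRA adapter on top.
base = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "tloen/alpaca-lora-7b")
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")

# Illustrative Alpaca-style prompt; the actual template lives in generate.py.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nName three primary colors.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))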

Kudos

Thanks to @tloen for the initial version of this repo, which I forked and made some small changes to here.
