llama2.npy

llama2.npy is a Python + NumPy port of llama2.c by Andrej Karpathy. It implements the baby Llama2 architecture trained on TinyStories, along with the accompanying tokenizer, using just Python and NumPy. The model weights and tokenizer scores also come from Andrej Karpathy.
This repo is a companion to my blog post: "llama2.npy: Implementing the Llama2 LLM using just Python and NumPy".
Generating text from the baby Llama2 model

To get started, clone this repository to your local machine using the following command:
git clone https://github.com/jayeshmahapatra/llama2.npy
Make sure you have Numpy installed. You can install a specific version (1.23.5) with the following command:
pip install numpy==1.23.5
You can generate text with the Llama2 model by running the following command in your terminal:
python run_npy.py -i "Once upon a time" -w weights/stories15M.bin -n 20
Here's an explanation of the command line arguments for `run_npy.py`:
- `-i` or `--input`: The input prompt for text generation. Defaults to "Once upon a time".
- `-n` or `--num_tokens`: The number of tokens the model should generate. Defaults to 20.
- `-w` or `--weight`: The path to the binary file containing the model weights. This argument is required.
Code Files:
- model_npy: Implementation of the model, along with the text generation code.
- run_npy: Code to create a model, load its weights, and run inference.
- tokenizer_npy: A Python implementation of a BPE tokenizer.
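The llama2.c family of tokenizers encodes text by greedily merging the adjacent token pair with the highest merge score. A minimal sketch of that idea is below; the names and toy vocabulary are illustrative, and the real tokenizer_npy loads its vocabulary and scores from tokenizer.bin rather than hard-coding them:

```python
# Illustrative sketch of greedy score-based BPE encoding, not the
# repository's actual tokenizer_npy code.
def bpe_encode(text, vocab, scores):
    # Start from individual characters, then repeatedly merge the adjacent
    # pair whose concatenation has the highest score in the vocabulary.
    tokens = [vocab[c] for c in text]
    id_to_str = {i: s for s, i in vocab.items()}
    while True:
        best_score, best_idx = float("-inf"), None
        for i in range(len(tokens) - 1):
            merged = id_to_str[tokens[i]] + id_to_str[tokens[i + 1]]
            if merged in vocab and scores[vocab[merged]] > best_score:
                best_score, best_idx = scores[vocab[merged]], i
        if best_idx is None:
            break  # no adjacent pair can be merged any further
        merged = id_to_str[tokens[best_idx]] + id_to_str[tokens[best_idx + 1]]
        tokens = tokens[:best_idx] + [vocab[merged]] + tokens[best_idx + 2:]
    return tokens

# Toy vocabulary: three characters plus one merged token "ab".
vocab = {"a": 0, "b": 1, "c": 2, "ab": 3}
scores = [0.0, 0.0, 0.0, 1.0]
print(bpe_encode("abc", vocab, scores))  # → [3, 2]: "a"+"b" merged into "ab"
```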
Weight Files:
- tokenizer.bin: Saved Llama2 tokenizer weights.
- stories15M.bin: Saved weights for a baby Llama2 model trained on the TinyStories dataset.
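Checkpoints in the llama2.c style are flat binary files: a small header of integer config fields followed by raw float32 weight arrays. The sketch below round-trips a toy file in that spirit; the two header fields are assumed for illustration only and are not the verified layout of stories15M.bin:

```python
import struct
import tempfile

import numpy as np

# Hedged sketch of reading a llama2.c-style binary checkpoint: an int32
# header followed by flat float32 weights. The header fields used here
# (dim, n_layers) are assumptions, not the verified stories15M.bin layout.
def read_weights(path):
    with open(path, "rb") as f:
        dim, n_layers = struct.unpack("2i", f.read(8))  # assumed header
        weights = np.fromfile(f, dtype=np.float32)      # remaining floats
    return dim, n_layers, weights

# Write a toy file with the same structure, then read it back.
with tempfile.NamedTemporaryFile(suffix=".bin", delete=False) as tmp:
    tmp.write(struct.pack("2i", 4, 2))
    tmp.write(np.arange(8, dtype=np.float32).tobytes())
    path = tmp.name

dim, n_layers, w = read_weights(path)
print(dim, n_layers, w.shape)  # → 4 2 (8,)
```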