fast-llama is a super high-performance inference engine for LLMs like LLaMA (3x the speed of llama.cpp), written in pure C++. It can run an 8-bit quantized LLaMA2-7B model on a CPU with 56 cores at ~30 tokens/s. It outperforms all current open-source inference engines, with 2~3 times the inference speed of the renowned llama.cpp on a CPU.
Why use Fast-LLaMA?
Fast
- Extremely fast on CPU: faster than any other engine on GitHub, including llama.cpp (3 times faster than llama.cpp).

Simple
- Fewer than 7k lines of C++ code in total, with a well-organized code structure and no dependencies except NUMA (if needed for multi-CPU systems).

"Easy To Use" (target ☺️)
⚠️ Only CPU on Linux is supported currently. Support for other platforms, including Windows, Mac, and GPU, is coming soon.
- GCC 10.x or newer versions
- CPU with AVX-512
- libnuma-dev
Libraries such as MPI, OpenBLAS, and MKL are NOT needed currently.
Method 1. Using the provided build script:
bash ./build.sh
Method 2. Using Make:
make -j 4
Only the gguf and llama2.c model formats are currently supported; an independent format is coming soon. To run the inference engine, execute the following command:
./main -c ./models/cnllama-7b/ggml-model-f32.gguf -f gguf -j 56 -q int8 -n 200 -i 'That was a long long story happened in the ancient China.'
The command-line options are as follows:
- -c: Path to the model file
- -f: Model file format (e.g., gguf)
- -j: Number of threads to use (e.g., 56)
- -q: Quantization mode (e.g., int8)
- -n: Number of tokens to generate (e.g., 200)
- -i: Input text (e.g., 'That was a long long story happened in the ancient China.')
- -h: Show usage information
fast-llama achieves a generation speed of approximately 25-30 tokens/s for an 8-bit quantized 7B model running on the following CPU configuration:
Architecture: x86_64
Model name: Intel(R) Xeon(R) Platinum 8350C CPU @ 2.60GHz
CPU(s): 112 (56 physical cores)
Thread(s) per core: 2
Core(s) per socket: 28
Socket(s): 2
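As a rough back-of-the-envelope check (our estimate, not a figure from the project): an 8-bit quantized 7B model holds about 7 GB of weights, and generating each token streams essentially all of them from memory, so ~30 tokens/s implies on the order of 7 GB × 30 ≈ 210 GB/s of sustained memory bandwidth, which is plausible for a dual-socket Ice Lake Xeon with eight DDR4-3200 channels per socket.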
Latency of the first token will be optimized later.
Why is it so fast?
- Ultimate memory efficiency (see the arena sketch below)
  - Zero memory allocations and frees during inference.
  - Maximization of memory locality.
- Well-designed thread scheduling algorithm
- Optimized operators
  - Fuse all operators that can be fused together.
  - Optimize the computation of several operators.
- Proper quantizations (see the quantization sketch below)
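To illustrate the zero-allocation idea, here is a minimal arena-allocator sketch of the general technique; the class and names are hypothetical and not taken from fast-llama's source. All scratch memory is reserved once up front, and inference steps only bump an offset into it:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical arena: one buffer is allocated before inference starts,
// "allocations" on the hot path just bump an offset, and reset() makes
// the whole buffer reusable for the next token. No malloc/free occurs
// while generating.
class Arena {
public:
    explicit Arena(size_t bytes) : buf_(bytes), used_(0) {}

    void* alloc(size_t bytes, size_t align = 64) {
        used_ = (used_ + align - 1) / align * align;  // round up for SIMD alignment
        assert(used_ + bytes <= buf_.size());         // arena must be sized up front
        void* p = buf_.data() + used_;
        used_ += bytes;
        return p;
    }

    void reset() { used_ = 0; }  // reuse the same memory for the next step

private:
    std::vector<char> buf_;
    size_t used_;
};
```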
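For the quantization bullet, here is a minimal sketch of symmetric per-row int8 quantization, the common approach behind int8 modes like -q int8; the names are illustrative, and fast-llama's actual scheme may differ:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Illustrative symmetric int8 quantization: each float row gets one
// scale so that w[i] ~= q[i] * scale, with q[i] in [-127, 127].
struct QuantizedRow {
    std::vector<int8_t> q;
    float scale;
};

QuantizedRow quantize_row_int8(const float* w, size_t n) {
    float max_abs = 0.0f;
    for (size_t i = 0; i < n; ++i)
        max_abs = std::max(max_abs, std::fabs(w[i]));

    QuantizedRow out;
    out.scale = (max_abs > 0.0f) ? max_abs / 127.0f : 1.0f;
    out.q.resize(n);
    for (size_t i = 0; i < n; ++i)
        out.q[i] = static_cast<int8_t>(std::lround(w[i] / out.scale));
    return out;
}
```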
fast-llama is licensed under the MIT License.
We would like to express our gratitude to all contributors and users of FastLLaMA. Your support and feedback have been invaluable in making this project a success. If you encounter any issues or have any suggestions, please feel free to open an issue on the GitHub repository.
Email: 📩[email protected]
Contact me if you have any questions.