Fast-LLaMA: A High-Performance Inference Engine


Description

fast-llama is a high-performance inference engine for LLMs such as LLaMA, written in pure C++. It can run an 8-bit quantized LLaMA2-7B model on a 56-core CPU at roughly 30 tokens/s, which is 2~3x faster on CPU than the well-known llama.cpp and faster than other current open-source inference engines.

Advantages

Why use Fast-LLaMA?

  • Fast
    • Extremely fast on CPU: up to 3x faster than llama.cpp and other open-source engines on GitHub.
  • Simple
    • Fewer than 7k lines of well-organized C++ code, with no dependencies other than libnuma (only needed on multi-CPU systems).
  • Easy to use (a design target ☺️)

⚠️ Only CPU is supported currently. Support for GPU is coming soon.

Quick Start

Compile

Only Linux is currently supported. Support for other platforms, including Windows and macOS, as well as for GPUs, is coming soon.

Requirements

  • GCC 10.x or newer
  • CPU with AVX-512 support (see the check below)
  • libnuma-dev

Libraries such as MPI, OpenBLAS, or MKL are NOT needed.
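
If you are unsure whether your CPU supports AVX-512, a minimal standalone check (not part of this repository; the file name is illustrative) can be compiled with the same GCC toolchain:

// avx512_check.cpp -- hypothetical helper, compile with: g++ -std=c++17 avx512_check.cpp -o avx512_check
#include <cstdio>

int main() {
    __builtin_cpu_init();                      // initialize GCC's CPU feature detection
    if (__builtin_cpu_supports("avx512f")) {   // AVX-512 Foundation instructions
        std::puts("AVX-512F is available; fast-llama's CPU requirement is met.");
        return 0;
    }
    std::puts("AVX-512F is NOT available on this CPU.");
    return 1;
}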

Compilation

Method 1. Using the provided build script:

bash ./build.sh

Method 2. Using Make:

make -j 4

Run

To run the inference engine, execute the following command:

Only models in the gguf and llama2.c formats are currently supported. An independent format is coming soon.

./main -c ./models/cnllama-7b/ggml-model-f32.gguf -f gguf -j 56 -q int8 -n 200 -i 'That was a long long story happened in the ancient China.'

The command-line options are as follows:

  • -c: Path to the model file
  • -f: Model file format (e.g., gguf)
  • -j: Number of threads to use (e.g., 56)
  • -q: Quantization mode (e.g., int8)
  • -n: Number of tokens to generate (e.g., 200)
  • -i: Input text (e.g., 'That was a long long story happened in the ancient China.')
  • -h: Show usage information

Performance

fast-llama achieves a generation speed of approximately 25-30 tokens/s for an 8-bit quantized 7B model running on the following CPU configuration:

Architecture:            x86_64
Model name:              Intel(R) Xeon(R) Platinum 8350C CPU @ 2.60GHz
CPU(s):                  112 (56 physical cores)
Thread(s) per core:      2
Core(s) per socket:      28
Socket(s):               2

Latency of the first token will be optimized later.

Why

Why is it so fast?

  • Ultimate memory efficiency
    • Zero memory allocations and frees during inference.
    • Maximized memory locality.
  • Well-designed thread scheduling algorithm
  • Optimized operators
    • Fuses all operators that can be fused together (see the sketch below)
    • Optimizes the computation of several operators
  • Proper quantization
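
As a rough illustration of the operator-fusion point above (a simplified sketch, not the actual AVX-512 and quantization-aware kernels used by fast-llama), a matrix-vector product, bias add, and SiLU activation can be computed in a single pass instead of three separate passes over memory:

// Hypothetical sketch: y[i] = silu(dot(W[i], x) + b[i]) computed in one fused loop,
// so intermediates stay in registers and each output element is written exactly once.
#include <cmath>
#include <cstddef>

void fused_matmul_bias_silu(const float *W, const float *x, const float *b,
                            float *y, std::size_t rows, std::size_t cols) {
    for (std::size_t i = 0; i < rows; ++i) {
        float acc = b[i];                      // bias folded into the accumulator
        for (std::size_t j = 0; j < cols; ++j)
            acc += W[i * cols + j] * x[j];     // dot product of row i with x
        y[i] = acc / (1.0f + std::exp(-acc));  // SiLU: z * sigmoid(z)
    }
}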

License

fast-llama is licensed under the MIT License.

Acknowledgements

We would like to express our gratitude to all contributors and users of FastLLaMA. Your support and feedback have been invaluable in making this project a success. If you encounter any issues or have any suggestions, please feel free to open an issue on the GitHub repository.

Contact

Email: 📩[email protected]

Contact me if you have any questions.
