Commit

Update README.md
CoderLSF committed Nov 16, 2023
1 parent 3e2c3e9 commit d867976
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion README.md
@@ -1,5 +1,5 @@
 # Fast-LLaMA: A High-Performance Inference Engine
-<p align="center"><img width="600" alt="image" src="https://github.com/CoderLSF/fast-llama/assets/65639063/18165904-bf17-4a2e-910b-36e096a774d8"></p>
+<p align="center"><img width="1022" alt="image" src="https://github.com/CoderLSF/fast-llama/assets/65639063/8c3eefc8-0db0-4cb1-8e78-58acc7cf77e3"></p>
 
 ## Descriptions
 fast-llama is a high-performance inference engine for LLMs such as LLaMA, written in `pure C++` and roughly **3x** faster than `llama.cpp`. It can run an **`8-bit`** quantized **`LLaMA2-7B`** model on a 56-core CPU at about **`30 tokens/s`**. It outperforms current open-source inference engines, with 2-3x faster CPU inference than the well-known llama.cpp.
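The description above mentions running an 8-bit quantized LLaMA2-7B model on a multi-core CPU. As a rough illustration of what 8-bit weight quantization involves, here is a minimal C++ sketch of symmetric per-row int8 quantization and a dequantizing dot product. It assumes a simple one-scale-per-row scheme chosen for illustration; it is not taken from fast-llama's source, whose actual quantization format and kernels may differ.

```cpp
// Illustrative sketch of symmetric per-row 8-bit weight quantization.
// Not fast-llama's actual implementation; format and kernels may differ.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

// A row of weights stored as int8 values plus one fp32 scale.
struct QuantRow {
    std::vector<int8_t> q;  // quantized values in [-127, 127]
    float scale;            // dequantize with: w ~ q * scale
};

// Quantize a row of fp32 weights to int8 using a single per-row scale.
QuantRow quantize_row(const std::vector<float>& w) {
    float max_abs = 0.0f;
    for (float x : w) max_abs = std::max(max_abs, std::fabs(x));
    float scale = max_abs > 0.0f ? max_abs / 127.0f : 1.0f;
    QuantRow out{std::vector<int8_t>(w.size()), scale};
    for (size_t i = 0; i < w.size(); ++i)
        out.q[i] = static_cast<int8_t>(std::lround(w[i] / scale));
    return out;
}

// Dot product of a quantized weight row with an fp32 activation vector,
// applying the row scale once at the end.
float dot(const QuantRow& row, const std::vector<float>& x) {
    float acc = 0.0f;
    for (size_t i = 0; i < row.q.size(); ++i)
        acc += static_cast<float>(row.q[i]) * x[i];
    return acc * row.scale;
}

int main() {
    std::vector<float> w = {0.12f, -0.50f, 0.33f, 0.08f};
    std::vector<float> x = {1.0f, 2.0f, -1.0f, 0.5f};
    QuantRow qw = quantize_row(w);
    std::printf("quantized dot = %f\n", dot(qw, x));
    return 0;
}
```

Storing each weight as one int8 byte plus a shared per-row scale cuts memory traffic to roughly a quarter of fp32, which is the main reason 8-bit quantization helps CPU inference throughput.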
