alamin655/replit-3B-inference
Replit Code Instruct inference using CPU

Run inference on the Replit code instruct model using your CPU. This inference code uses a ggml-quantized model. To run the model, we use ctransformers, a library that provides Python bindings to ggml.
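As a rough sketch of how ctransformers loads a ggml model on CPU (the model path, `model_type`, and prompt template below are assumptions for illustration; check the repo's `inference.py` and `download_model.py` for the actual values):

```python
def build_prompt(instruction: str) -> str:
    # Instruction-style prompt wrapper. The exact template is an
    # assumption; the repo's inference script defines the real one.
    return f"### Instruction:\n{instruction}\n### Response:\n"


def run(instruction: str,
        model_path: str = "models/replit-v1-3b-q4_1.bin") -> str:
    # Lazy import so the prompt helper works even without ctransformers
    # installed. model_path is a hypothetical local weights file.
    from ctransformers import AutoModelForCausalLM

    # model_type tells ggml which architecture to instantiate.
    llm = AutoModelForCausalLM.from_pretrained(model_path,
                                               model_type="replit")
    # Calling the model object generates a completion on the CPU.
    return llm(build_prompt(instruction), max_new_tokens=256)
```

For example, `run("Write a function that adds two numbers.")` would return the model's generated code as a string.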

Demo video: 2023-06-27.14-46-07.mp4

Requirements

Using Docker should make all of this easier for you. Minimum specs: a system with 8 GB of RAM. Python 3.10 is recommended.

Tested and working on:

  • AMD Epyc 7003 series CPU
  • AMD Ryzen 5950X CPU

Benchmark numbers for these two CPUs will be posted later.

Setup

First, create a venv.

python -m venv env && source env/bin/activate

Next, install the dependencies.

pip install -r requirements.txt

Next, download the quantized model weights (about 1.5 GB).

python download_model.py
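A download script like this typically streams the weights to a local directory, skipping the transfer if the file already exists. This is a hypothetical sketch, not the repo's actual `download_model.py`; the URL and filename below are placeholders.

```python
import os
import urllib.request

# Placeholder URL and directory -- the real script defines its own.
MODEL_URL = "https://example.com/replit-v1-3b-q4_1.bin"
MODEL_DIR = "models"


def model_dest(url: str, dest_dir: str = MODEL_DIR) -> str:
    # Destination path: <dest_dir>/<basename of the URL>.
    return os.path.join(dest_dir, os.path.basename(url))


def download(url: str = MODEL_URL) -> str:
    dest = model_dest(url)
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    if not os.path.exists(dest):
        # Stream the ~1.5 GB file to disk; skipped on re-runs.
        urllib.request.urlretrieve(url, dest)
    return dest
```

Calling `download()` once fetches the weights; subsequent runs return the cached path immediately.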

Ready to rock: run inference.

python inference.py

Finally, modify the prompt and generation parameters in the inference script to suit your task.
