LocalAI is a drop-in replacement REST API that's compatible with OpenAI API specifications for local inferencing. It allows you to run LLMs (and other models) locally or on-prem with consumer-grade hardware, supporting multiple model families that are compatible with the ggml format. It does not require a GPU.
For a list of the supported model families, please see the model compatibility table.
In a nutshell:
- Local, OpenAI drop-in alternative REST API. You own your data.
- NO GPU required. NO Internet access is required either. Optional GPU acceleration is available for llama.cpp-compatible LLMs; see the building instructions.
- Supports multiple models: audio transcription, text generation with GPTs, image generation with stable diffusion (experimental)
- Once loaded the first time, it keeps models loaded in memory for faster inference
- Doesn't shell out, but uses C++ bindings for faster inference and better performance.
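Because the API mirrors the OpenAI specification, existing OpenAI clients and integrations can usually be pointed at a running LocalAI instance just by changing the base URL. A minimal sketch, assuming LocalAI is listening on localhost:8080 and the client honors the OPENAI_API_BASE environment variable (as the OpenAI Python SDK does):

# Redirect an OpenAI-compatible client to LocalAI instead of api.openai.com
export OPENAI_API_BASE=http://localhost:8080/v1
# LocalAI does not check the API key, but many clients require a non-empty value
export OPENAI_API_KEY=sk-not-needed
# Requests the client would normally send to OpenAI now hit LocalAI, e.g.:
curl $OPENAI_API_BASE/models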
LocalAI was created by Ettore Di Giacinto and is a community-driven project, focused on making AI accessible to anyone. Any contributions, feedback, and PRs are welcome!
See the examples on how to integrate LocalAI with other popular projects:
| ChatGPT OSS alternative | Image generation |
| --- | --- |
| Telegram bot | Flowise |
See the Getting started and examples sections to learn how to use LocalAI. For a list of curated models check out the model gallery.
- 🔥🔥🔥 19-06-2023: v1.19.0: CUDA support! Release notes Changelog
- 🔥🔥🔥 06-06-2023: v1.18.0: Many updates, new features, and much more! Check out the Release notes!
- 29-05-2023: LocalAI now has a website, https://localai.io! Check the news in the dedicated section!
For the latest news, also follow LocalAI on Twitter @LocalAI_API and @mudler_it.
To help the project you can:
- Hacker News post - help us out by voting if you like this project.
- If you have technological skills and want to contribute to development, have a look at the open issues. If you are new, you can have a look at the good-first-issue and help-wanted labels.
- If you don't have technological skills you can still help by improving the documentation, adding examples, or sharing your user stories with our community; any help and contribution is welcome!
Check out the Getting started section. Below you will find generic, quick instructions to get up and running with LocalAI.
The easiest way to run LocalAI is by using docker-compose (to build locally, see building LocalAI):
git clone https://github.com/go-skynet/LocalAI
cd LocalAI
# (optional) Checkout a specific LocalAI tag
# git checkout -b build <TAG>
# copy your models to models/
cp your-model.bin models/
# (optional) Edit the .env file to set things like context size and threads
# vim .env
# start with docker-compose
docker-compose up -d --pull always
# or you can build the images with:
# docker-compose up -d --build
# Now the API is accessible at localhost:8080
curl http://localhost:8080/v1/models
# {"object":"list","data":[{"id":"your-model.bin","object":"model"}]}
curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
"model": "your-model.bin",
"prompt": "A long time ago in a galaxy far, far away",
"temperature": 0.7
}'
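The .env file mentioned above controls runtime settings such as thread count and context size. A minimal sketch; the variable names below follow the sample .env shipped in the repository, so treat them as an assumption and check the file in your checkout:

# .env (illustrative values; variable names assumed from the repository's sample .env)
THREADS=4           # CPU threads used for inference
CONTEXT_SIZE=512    # model context window
MODELS_PATH=/models # path where model files are looked up inside the container
# DEBUG=true        # enable verbose logging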
The following example runs the GPT4All-J model with docker-compose:
# Clone LocalAI
git clone https://github.com/go-skynet/LocalAI
cd LocalAI
# (optional) Checkout a specific LocalAI tag
# git checkout -b build <TAG>
# Download gpt4all-j to models/
wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j
# Use a template from the examples
cp -rf prompt-templates/ggml-gpt4all-j.tmpl models/
# (optional) Edit the .env file to set things like context size and threads
# vim .env
# start with docker-compose
docker-compose up -d --pull always
# or you can build the images with:
# docker-compose up -d --build
# Now the API is accessible at localhost:8080
curl http://localhost:8080/v1/models
# {"object":"list","data":[{"id":"ggml-gpt4all-j","object":"model"}]}
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
"model": "ggml-gpt4all-j",
"messages": [{"role": "user", "content": "How are you?"}],
"temperature": 0.9
}'
# {"model":"ggml-gpt4all-j","choices":[{"message":{"role":"assistant","content":"I'm doing well, thanks. How about you?"}}]}
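The prompt template copied into models/ above controls how an incoming request is wrapped before it reaches the model. Roughly, a template is a Go text/template in which {{.Input}} is substituted with the user's prompt; the sketch below is illustrative, so check prompt-templates/ggml-gpt4all-j.tmpl in your checkout for the exact wording:

The prompt below is a question to answer, a task to complete, or a conversation
to respond to; decide which and write an appropriate response.
### Prompt:
{{.Input}}
### Response: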
In order to build the LocalAI container image locally you can use docker:
# build the image
docker build -t localai .
# run the container (add -p 8080:8080 to reach the API from the host)
docker run localai
Or you can build the binary with make:
make build
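Once built, the binary can be started directly against a local models directory. A minimal sketch; the flag names below are assumptions based on recent releases, so run the binary with --help for the authoritative list:

# start the API on port 8080, serving models from ./models (flag names assumed; see --help)
./local-ai --models-path ./models --address ":8080" --threads 4 --context-size 512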
See the build section in our documentation for detailed instructions.
LocalAI can be installed inside Kubernetes with helm. See installation instructions.
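As a rough sketch of what the Helm installation looks like (the chart repository URL and chart name here are assumptions; follow the linked installation instructions for the authoritative steps):

# add the chart repository and install LocalAI into the current namespace
helm repo add go-skynet https://go-skynet.github.io/helm-charts/
helm repo update
helm install local-ai go-skynet/local-ai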
See the list of the supported API endpoints and how to configure image generation and audio transcription.
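Since the endpoints follow the OpenAI API, image generation and audio transcription are plain HTTP calls once the corresponding backends are configured. A rough sketch; the model names, file name, and image size below are placeholders:

# image generation (requires a stable diffusion backend to be configured)
curl http://localhost:8080/v1/images/generations -H "Content-Type: application/json" -d '{
  "prompt": "a cute baby sea otter",
  "size": "256x256"
}'

# audio transcription (requires a whisper.cpp model; audio.wav is a placeholder)
curl http://localhost:8080/v1/audio/transcriptions \
  -H "Content-Type: multipart/form-data" \
  -F file="@audio.wav" -F model="whisper-1"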
See the FAQ section for a list of common questions.
Feel free to open up a PR to get your project listed!
- Mimic OpenAI API (mudler#10)
- Binary releases (mudler#6)
- Upstream our golang bindings to llama.cpp (ggerganov/llama.cpp#351)
- Upstream gpt4all bindings
- Multi-model support
- Have a webUI!
- Allow configuration of defaults for models.
- Support for embeddings
- Support for audio transcription with https://github.com/ggerganov/whisper.cpp
- GPU/CUDA support ( mudler#69 )
- Enable automatic downloading of models from a curated gallery, with only free-licensed models, directly from the webui.
LocalAI is a community-driven project created by Ettore Di Giacinto.
License: MIT
Author: Ettore Di Giacinto and others
LocalAI couldn't have been built without the help of great software already available from the community. Thank you!
- llama.cpp
- https://github.com/tatsu-lab/stanford_alpaca
- https://github.com/cornelk/llama-go for the initial ideas
- https://github.com/antimatter15/alpaca.cpp
- https://github.com/EdVince/Stable-Diffusion-NCNN
- https://github.com/ggerganov/whisper.cpp
- https://github.com/saharNooby/rwkv.cpp