Running Llama3 on NAS

This is a simple guide to running Llama3 locally on your NAS using Open WebUI and Ollama.

Hardware

Setup Instructions

1. Create Necessary Directories

Open a terminal and run the following commands to create directories for your AI models and the web interface:

$ mkdir -p /home/ansonhe/AI/ollama
$ mkdir -p /home/ansonhe/AI/Llama3/open-webui

2. Deploy Using Docker

Copy the provided docker-compose.yml file to your directory, then start the services using Docker:

$ sudo docker-compose up -d
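For reference, a minimal docker-compose.yml for this setup could look like the sketch below. This is an illustrative example, not the file shipped with the repository: the image tags, container names, and container-side paths (/root/.ollama for Ollama models, /app/backend/data for Open WebUI data) are assumptions based on the official images; only the host directories and ports come from this guide.

version: "3"
services:
  ollama:
    image: ollama/ollama                          # official Ollama image (assumed)
    container_name: ollama
    ports:
      - "11434:11434"                             # Ollama API port used in step 4
    volumes:
      - /home/ansonhe/AI/ollama:/root/.ollama     # persist downloaded models
    restart: unless-stopped

  open-webui:
    image: ghcr.io/open-webui/open-webui:main     # Open WebUI image (assumed)
    container_name: open-webui
    ports:
      - "8080:8080"                               # web interface port used in step 3
    volumes:
      - /home/ansonhe/AI/Llama3/open-webui:/app/backend/data   # persist WebUI data
    depends_on:
      - ollama
    restart: unless-stopped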

3. Access the Web Interface

Open a web browser and go to http://YOUR_NAS_IP:8080.
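If the page does not load, you can check whether Open WebUI is reachable from another machine on your network (curl -I only fetches the response headers; a 200 status means the service is up):

$ curl -I http://YOUR_NAS_IP:8080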

4. Configure Ollama Models

In the web interface, change the Ollama Models address to 0.0.0.0:11434.

(Screenshot: Ollama address configuration in Open WebUI)
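To confirm that the Ollama service itself is reachable on that port, you can query its API from a terminal; the root path returns a short status message, and /api/tags lists the models downloaded so far:

$ curl http://YOUR_NAS_IP:11434
$ curl http://YOUR_NAS_IP:11434/api/tags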

5. Download the Llama3 Model

Download the Llama3 model by running ollama run llama3:8b. (Only the 8B version is used here because of the performance limitations of the NAS.)

(Screenshot: downloading the llama3:8b model)
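As an alternative to the web interface, the model can also be pulled from the NAS shell with docker exec, assuming the Ollama container is named ollama as in the compose sketch above:

$ sudo docker exec -it ollama ollama pull llama3:8b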

6. Set Llama3 as the Default Model

After the model has been downloaded, you can select Llama3 as your default model and host it locally.

(Screenshot: Llama3 selected as the default model)

Performance Considerations

Please note that the NAS CPU may still struggle when running large language models (LLMs), so responses can be slow.
