Using Ollama in LobeChat

Ollama is a powerful framework for running large language models (LLMs) locally, supporting a variety of models including Llama 2, Mistral, and more. LobeChat now supports integration with Ollama, which means you can easily use the language models provided by Ollama directly in LobeChat.

This document will guide you through using Ollama in LobeChat:

Using Ollama on macOS

Local Installation of Ollama

Download Ollama for macOS and unzip/install it.
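
If you prefer the command line, you can also install Ollama with Homebrew (assuming Homebrew is already installed; note that the Homebrew build runs as a background service rather than as the menu-bar app):

```bash
# Install Ollama via Homebrew
brew install ollama
# Start Ollama as a background service
brew services start ollama
```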

Configure Ollama for Cross-Origin Access

Because Ollama's default configuration restricts access to local requests only, you need to set the environment variable OLLAMA_ORIGINS to allow cross-origin access and port listening. Use launchctl to set the environment variable:

```bash
launchctl setenv OLLAMA_ORIGINS "*"
```

After setting up, restart the Ollama application.
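
To confirm the variable took effect, you can read it back with launchctl:

```bash
# Verify the variable is visible to newly launched applications
launchctl getenv OLLAMA_ORIGINS
# Expected output: *
```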

Conversing with Local Large Models in LobeChat

Now, you can start conversing with the local LLM in LobeChat.

[Image: Chatting with llama3 in LobeChat]

Using Ollama on Windows

Local Installation of Ollama

Download Ollama for Windows and install it.

Configure Ollama for Cross-Origin Access

Since Ollama's default configuration allows local access only, you need to set the environment variable OLLAMA_ORIGINS to enable cross-origin access and port listening.

On Windows, Ollama inherits your user and system environment variables.

  1. First, quit the Ollama program by clicking its icon in the Windows taskbar.
  2. Edit system environment variables from the Control Panel.
  3. Edit or create the Ollama environment variable OLLAMA_ORIGINS for your user account, setting the value to *.
  4. Click OK/Apply to save the change.
  5. Run Ollama again.
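
After Ollama restarts, you can quickly check that the API is reachable from a terminal (curl ships with recent versions of Windows; /api/tags is Ollama's model-listing endpoint):

```bash
# List locally available models; a JSON response means Ollama is running
curl http://127.0.0.1:11434/api/tags
```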

Conversing with Local Large Models in LobeChat

Now, you can start conversing with the local LLM in LobeChat.

Using Ollama on Linux

Local Installation of Ollama

Install using the following command:

```bash
curl -fsSL https://ollama.com/install.sh | sh
```

Alternatively, you can refer to the Linux manual installation guide.

Configure Ollama for Cross-Origin Access

Because Ollama's default configuration allows local access only, you need to set the environment variables OLLAMA_ORIGINS (for cross-origin access) and, if needed, OLLAMA_HOST (to listen on all interfaces). If Ollama runs as a systemd service, set the environment variables through systemctl:

  1. Edit the systemd service by running sudo systemctl edit ollama.service:

```bash
sudo systemctl edit ollama.service
```

  2. For each environment variable, add an Environment line under the [Service] section:

```bash
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"
```

  3. Save and exit.
  4. Reload systemd and restart Ollama:

```bash
sudo systemctl daemon-reload
sudo systemctl restart ollama
```
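
To confirm the override was picked up, you can inspect the environment the service will run with:

```bash
# Show the Environment= entries applied to the ollama service
systemctl show ollama.service --property=Environment
```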

Conversing with Local Large Models in LobeChat

Now, you can start conversing with the local LLM in LobeChat.

Deploying Ollama using Docker

Pulling Ollama Image

If you prefer using Docker, Ollama provides an official Docker image that you can pull using the following command:

```bash
docker pull ollama/ollama
```

Configure Ollama for Cross-Origin Access

Since Ollama's default configuration allows local access only, you need to set the environment variable OLLAMA_ORIGINS to enable cross-origin access and port listening.

If Ollama runs in a Docker container, you can add the environment variable to the docker run command:

```bash
docker run -d --gpus=all -v ollama:/root/.ollama -e OLLAMA_ORIGINS="*" -p 11434:11434 --name ollama ollama/ollama
```
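
Note that the --gpus=all flag requires the NVIDIA Container Toolkit; omit it to run on CPU only. Once the container is up, you can pull and test a model inside it, using llama3 as an example:

```bash
# Download a model inside the running container
docker exec -it ollama ollama pull llama3
# Start an interactive chat session to verify the model works
docker exec -it ollama ollama run llama3
```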

Conversing with Local Large Models in LobeChat

Now, you can start conversing with the local LLM in LobeChat.

Installing Ollama Models

Ollama supports a wide variety of models. You can browse them in the Ollama Library and choose a model based on your needs.

Installation in LobeChat

In LobeChat, we have enabled some common large language models by default, such as llama3, Gemma, and Mistral. When you select one of these models for a conversation, LobeChat will prompt you to download it.

[Image: LobeChat guides you to install an Ollama model]

Once downloaded, you can start conversing.

Pulling Models to Local with Ollama

Alternatively, you can install models by executing the following command in the terminal, using llama3 as an example:

```bash
ollama pull llama3
```
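
After the download finishes, you can list the models installed locally and chat with one directly from the terminal:

```bash
# Show all locally installed models
ollama list
# Start an interactive session with llama3
ollama run llama3
```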

Custom Configuration

You can find Ollama's configuration options in Settings -> Language Models, where you can configure Ollama's proxy address, model names, and so on.

[Image: Ollama provider settings in LobeChat]

Visit Integrating with Ollama to learn how to deploy LobeChat so that it integrates smoothly with Ollama.
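
As a quick illustration, a self-hosted LobeChat container can be pointed at a local Ollama instance through the OLLAMA_PROXY_URL environment variable (a minimal sketch; see the integration guide above for the authoritative setup, and note that host.docker.internal assumes Docker Desktop):

```bash
# Run LobeChat and point it at the Ollama API on the host machine
docker run -d -p 3210:3210 \
  -e OLLAMA_PROXY_URL=http://host.docker.internal:11434 \
  --name lobe-chat lobehub/lobe-chat
```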