
Chat-bot

by Tritium-chuan

Abstract

Chat-bot is a chat robot that can be run entirely locally. It combines the speech-to-text model Whisper, the large language model llama, and the text-to-speech toolkit PaddleSpeech.

Supported Platform

Chat-bot can only be compiled and run on Apple macOS.

Make sure that the platform has an Apple M-series chip with at least 16 GB of unified memory.

The test platform is a MacBook Pro (14-inch, late 2023) with an M3 Pro chip (11 CPU cores, 14 GPU cores, and 18 GB of unified memory).

Installation

Create Environment

Install Anaconda.

Create the environment Chatbot:

```shell
conda create -n Chatbot python=3.8.18
```

You can also create the conda environment from the configuration file:

```shell
conda env create -f conda-env-chatbot.yaml
```

Activate the environment:

```shell
conda activate Chatbot
```

Install PaddlePaddle.

Install opencc and ffmpeg.
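As a sketch, PaddlePaddle can be installed with pip and opencc/ffmpeg with Homebrew; the project does not pin versions, so adjust as needed:

```shell
# Sketch, assuming pip and Homebrew are available; version pins are
# not specified by the project.
pip install paddlepaddle    # CPU build of PaddlePaddle
brew install opencc ffmpeg  # Chinese text conversion and audio processing
```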

File Setup

Create Chat-bot-dir:

```shell
mkdir Chat-bot-dir
cd Chat-bot-dir
mkdir llama-models
mkdir whisper-models
```

Clone the project:

```shell
git clone https://github.com/Tritium-chuan/Chat-bot.git
```

Compile Chat-bot using g++.

Compile main.cpp and main-ch.cpp.
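A minimal compile sketch, assuming main.cpp and main-ch.cpp sit at the top of the cloned Chat-bot folder and need no extra flags (check the sources for the actual requirements):

```shell
# Sketch: compile the two entry points; add -I/-l flags if the
# sources require them.
cd Chat-bot
g++ -std=c++11 -O2 main.cpp -o main
g++ -std=c++11 -O2 main-ch.cpp -o main-ch
cd ..
```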

Clone whisper.cpp, llama.cpp, and PaddleSpeech:

```shell
git clone https://github.com/ggerganov/whisper.cpp
git clone https://github.com/ggerganov/llama.cpp
git clone https://github.com/PaddlePaddle/PaddleSpeech
```

Compile whisper.cpp and llama.cpp using make.

Read the README.md files in whisper.cpp and llama.cpp, and make sure that the whisper.cpp server and the llama.cpp server are compiled and ready to use.
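The build step above can be sketched as follows; a `server` target existed in the Makefiles of both projects at the time of writing, but target names change between versions, so consult each README:

```shell
# Sketch: build the HTTP servers of both projects with make.
# Target names may differ in newer versions of whisper.cpp and llama.cpp.
(cd whisper.cpp && make server)
(cd llama.cpp && make server)
```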

Install PaddleSpeech:

```shell
cd PaddleSpeech
pip install pytest-runner
pip install .
```

Model Download

English llama models can be downloaded from Meta.

Chinese llama models can be downloaded here.

Place the .gguf files of the llama models in the llama-models folder.

For example, the ggml-model-q4_0.gguf file of the 7B-chat model should be placed in the Chat-bot-dir/llama-models/7B-chat folder, and the ggml-model-q4_0.gguf file of the 7B-Chinese model in the Chat-bot-dir/llama-models/7B-ch folder.

Whisper models can be downloaded here.

Place the .bin files of the whisper models in the whisper-models folder.

To ensure accurate recognition of Chinese speech, a large model is needed.

For example, ggml-large-v1.bin should be placed in the Chat-bot-dir/whisper-models folder.
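A placement sketch, assuming the model files were downloaded to ~/Downloads (the download location is an example, not part of the project):

```shell
# Sketch: run from inside Chat-bot-dir; the ~/Downloads paths are examples.
mkdir -p llama-models/7B-chat llama-models/7B-ch
mv ~/Downloads/ggml-model-q4_0.gguf llama-models/7B-chat/
mv ~/Downloads/ggml-large-v1.bin whisper-models/
```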

PaddleSpeech models will be downloaded automatically when using Chat-bot.

Check Chat-bot-dir.

```
.
├── Chat-bot
├── PaddleSpeech
├── llama.cpp
├── llama-models
│   ├── 7B-ch
│   └── 7B-chat
├── whisper.cpp
└── whisper-models
    └── ggml-large-v1.bin
```

Usage

Activate Environment

```shell
conda activate Chatbot
```

Chat in Chinese

Start the Whisper server.

```shell
./server-whisper-ch.sh
```

Start the llama server.

```shell
./server-llama-ch.sh
```

Start the PaddleSpeech server.

```shell
./server-pds.sh
```

Start the Chat-bot server.

```shell
./server-chat-ch.sh
```

Start main-ch.

```shell
./main-ch
```

Chat in English

Start the Whisper server.

```shell
./server-whisper.sh
```

Start the llama server.

```shell
./server-llama.sh
```

Start the Chat-bot server.

```shell
./server-chat.sh
```

Start main.

```shell
./main
```

Always make sure that the servers all use different ports. See Note.md for more information.

Check the paths to the models before use, and modify them if you are using different models.

Issues

The PaddleSpeech server does not work properly in English mode.

Acknowledgement

The author would like to thank ChatGPT for its assistance in writing code and reading development documentation.
