retrieval-augmented-generation

Retrieval augmented generation (RAG) demos with Llama-2-7b, Mistral-7b, Zephyr-7b, Gemma

The demos use quantized models and run on CPU with acceptable inference time. They can run offline without Internet access, thus allowing deployment in an air-gapped environment.
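
To give a sense of what CPU inference with a quantized model looks like, here is a minimal sketch using llama-cpp-python. The model filename and generation parameters are assumptions for illustration; the demos' actual loading code is driven by `config.yaml` and may use a different library.

```python
# Minimal sketch: CPU inference with a quantized GGUF model via llama-cpp-python.
# The model path and parameters are assumptions, not the repo's exact config.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",  # assumed quantized checkpoint
    n_ctx=2048,    # context window
    n_threads=4,   # CPU threads; tune to your machine
)

out = llm("What is retrieval augmented generation?", max_tokens=128)
print(out["choices"][0]["text"])
```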

The demos also allow the user to:

  • apply a propositionizer to document chunks
  • perform reranking upon retrieval
  • perform hypothetical document embedding (HyDE); a minimal sketch of this technique follows the list
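
HyDE replaces the raw query embedding with the embedding of a hypothetical answer generated by the LLM. The sketch below assumes a sentence-transformers embedder and uses a `generate` callable as a stand-in for whichever local LLM the demo loads; neither is necessarily what the repo's own code does.

```python
# Minimal HyDE sketch. SentenceTransformer is real; `generate` is a stand-in
# for the local LLM (see the loading snippet above).
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def hyde_retrieve(query, docs, generate, k=3):
    # 1. Ask the LLM for a hypothetical passage that would answer the query.
    hypothetical = generate(f"Write a short passage answering: {query}")
    # 2. Embed the hypothetical passage instead of the query itself.
    q_vec = embedder.encode(hypothetical, normalize_embeddings=True)
    d_vecs = embedder.encode(docs, normalize_embeddings=True)
    # 3. Rank documents by cosine similarity (dot product of normalized vectors).
    scores = d_vecs @ q_vec
    top = np.argsort(scores)[::-1][:k]
    return [docs[i] for i in top]
```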

🔧 Getting Started

Set up the development environment using conda (install Miniconda or Anaconda first if you have not already):

conda env create --name rag -f environment.yaml --force

Activate the environment.

conda activate rag

Download model artefacts

Download and save the models in ./models and update config.yaml. The models used in this demo are quantized versions of Llama-2-7b, Mistral-7b, Zephyr-7b and Gemma.
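
One way to fetch a quantized checkpoint is via huggingface_hub. The repo ID and filename below are examples of a commonly used GGUF build, not necessarily the exact checkpoint this project was tested with, so adjust config.yaml to whatever you download.

```python
# Sketch: download a quantized GGUF model into ./models with huggingface_hub.
# The repo_id/filename are an example build, not necessarily the exact
# checkpoint this project expects.
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="TheBloke/Mistral-7B-Instruct-v0.2-GGUF",
    filename="mistral-7b-instruct-v0.2.Q4_K_M.gguf",
    local_dir="./models",
)
```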

Add prompt format

Since each model type has its own prompt format, include the format in ./src/prompt_templates.py. For example, the format used by OpenBuddy models is:

_openbuddy_format = """{system}
User: {user}
Assistant:"""

Refer to the file for more details.
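
To give a sense of how these templates are consumed, here is a minimal sketch. The Llama-2 chat layout shown is the standard one for that model family, but the dict keys and helper function are illustrative and may not match the actual code in ./src/prompt_templates.py.

```python
# Sketch: selecting and filling a per-model prompt template.
# _llama2_format follows the standard Llama-2 chat layout; the TEMPLATES dict
# and build_prompt helper are illustrative, not the repo's actual API.
_llama2_format = """<s>[INST] <<SYS>>
{system}
<</SYS>>

{user} [/INST]"""

_openbuddy_format = """{system}
User: {user}
Assistant:"""

TEMPLATES = {"llama2": _llama2_format, "openbuddy": _openbuddy_format}

def build_prompt(model_type, system, user):
    return TEMPLATES[model_type].format(system=system, user=user)

print(build_prompt("openbuddy", "You are a helpful assistant.", "Hello!"))
```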

💻 App

We use Streamlit as the interface for the demos. There are two demos:

  • Conversational Retrieval
streamlit run app_conv.py
  • Retrieval QA
streamlit run app_qa.py

🔍 Usage

To get started, upload a PDF and click Build VectorDB. Building the vector DB can take a while.
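
Under the hood, building the vector DB amounts to loading the PDF, chunking it, embedding the chunks, and indexing them. The sketch below assumes LangChain community loaders and FAISS; the demo's actual pipeline (including the propositionizer and reranking options) lives in the repo's source and may differ.

```python
# Sketch: PDF -> chunks -> embeddings -> FAISS index, roughly what
# "Build VectorDB" does. The library choices (LangChain + FAISS) are
# assumptions about the pipeline, not a guaranteed match to the repo's code.
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS

docs = PyPDFLoader("paper.pdf").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
vectordb = FAISS.from_documents(chunks, embeddings)
vectordb.save_local("./vectordb")
```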

