Mistral7b-Bhagavad-Gita-RAG-AI-Bot

Retrieval Augmented Generation over Bhagavad Gita books using Mistral7b and a FAISS vector database, on Google Colab (free T4 GPU).

🐣 Please follow me for new updates: https://github.com/shrimantasatpati

🚦 WIP 🚦

Deployments coming soon!

Technology Stack

  1. FAISS - vector database for fast similarity search over the embedded Gita texts
  2. Google Colab - development and inference on the free T4 GPU
  3. Gradio - web UI for inference on the free-tier Colab T4 GPU
  4. HuggingFace - Transformers and Sentence Transformers for creating the vector embeddings (see the indexing sketch after this list), plus the quantized Mistral7b model
  5. LangChain - retrieval augmented generation (RAG) using the RetrievalQA chain functionality
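
As orientation, here is a minimal indexing sketch using LangChain and Sentence Transformers. The PDF directory, chunk sizes, and the all-MiniLM-L6-v2 embedding model are illustrative assumptions, not necessarily the exact settings used in the Colab notebook.

```python
# Minimal indexing sketch (paths, chunk sizes, and model are assumptions).
from langchain.document_loaders import PyPDFDirectoryLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS

# Load the Bhagavad Gita PDFs (hypothetical local copy of the Kaggle dataset).
documents = PyPDFDirectoryLoader("gita_books/").load()

# Split the books into overlapping chunks so each embedding covers a focused passage.
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(documents)

# Sentence Transformers model (assumed) produces one vector per chunk.
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)

# Build the FAISS index and save it in the layout described below.
db = FAISS.from_documents(chunks, embeddings)
db.save_local("vectorstore/db_faiss")  # writes index.faiss and index.pkl
```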

🦒 Colab

| Colab | Info |
| ----- | ---- |
| Open In Colab | Creating FAISS vector database from Kaggle dataset |
| Open In Colab | Mistral7b (4bit) RAG inference of Bhagavad Gita using Gradio |
  • Store the vector database in your Google Drive as "vectorstore/db_faiss". The db_faiss folder contains two files: index.faiss and index.pkl.
  • Mount your Google Drive to load the vector embeddings for inference.
  • BitsAndBytes configuration (load_in_4bit) for quantization - a slight loss in precision, but performance almost on par with the Mistral7b base model.
  • HuggingFace pipeline for "text-generation".
  • AutoTokenizer and AutoModelForCausalLM from "transformers" for tokenization and for loading the Mistral7b model from the HuggingFace Hub (see the inference sketch after this list).
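
Putting those bullets together, the following is a minimal inference sketch. The Mistral-7B-Instruct-v0.1 checkpoint, the Drive path, the embedding model, and the generation settings are assumptions for illustration; adjust them to match the notebook.

```python
# Minimal 4-bit RAG inference sketch; checkpoint, paths, and generation
# settings below are assumptions, not necessarily the notebook's exact values.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, pipeline)
from langchain.llms import HuggingFacePipeline
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA

# In Colab, mount Google Drive first:
# from google.colab import drive; drive.mount("/content/drive")

# 4-bit quantization: a small precision loss for large memory savings on a T4.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model_id = "mistralai/Mistral-7B-Instruct-v0.1"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

# Wrap a HuggingFace "text-generation" pipeline so LangChain can drive it.
llm = HuggingFacePipeline(pipeline=pipeline(
    "text-generation", model=model, tokenizer=tokenizer,
    do_sample=True, temperature=0.2, max_new_tokens=512,
))

# Load the saved FAISS index with the same embedding model used to build it.
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"  # assumed
)
db = FAISS.load_local("/content/drive/MyDrive/vectorstore/db_faiss", embeddings)

# RetrievalQA: fetch the top-k chunks, stuff them into the prompt, generate.
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=db.as_retriever(search_kwargs={"k": 3}),
    return_source_documents=True,
)

print(qa({"query": "What does the Gita teach about detachment from results?"})["result"])
```

A Gradio front end then only needs to wrap `qa`, e.g. `gr.Interface(fn=lambda q: qa({"query": q})["result"], inputs="text", outputs="text").launch()`.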

Dataset

FAISS vector embeddings

Main Repo

https://github.com/mistralai/mistral-src

Paper/Website

Output

Sample output screenshots (Image1, Image2) are included in the repository.

Contributor

Shrimanta Satpati
