Commit de46708 (parent a6db51c) — forked from phidatahq/phidata

Add RAG with LanceDB knowledge and SQLite storage.

- Configured Ollama language model and LanceDb vector database
- Created PDFUrlKnowledgeBase with LanceDb storage
- Loaded existing knowledge base and set up SQL assistant storage
- Initialized Assistant with configurations for user interaction
- Enabled response generation in Markdown format for queries

Showing 2 changed files with 84 additions and 0 deletions.
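The components listed above implement retrieval-augmented generation: documents are embedded, stored in a vector database, and the chunks nearest to each query are retrieved and added to the prompt before the LLM answers. The retrieval step that LanceDb and OllamaEmbedder perform in this commit can be illustrated with a toy, stdlib-only sketch (the documents and 3-dimensional "embeddings" below are made up for illustration; the real embedder produces 768-dimensional vectors):

```python
import math

# Toy document store: each document maps to a made-up embedding vector.
# In the real commit, OllamaEmbedder (nomic-embed-text) produces the
# vectors and LanceDb stores and searches them.
DOCS = {
    "Soak the bananas in coconut milk.": [0.9, 0.1, 0.0],
    "Preheat the oven to 180C.": [0.1, 0.8, 0.2],
    "Whisk the eggs with sugar.": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of norms
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, k=1):
    # Rank documents by similarity to the query embedding, return top k
    ranked = sorted(DOCS, key=lambda d: cosine(DOCS[d], query_vec), reverse=True)
    return ranked[:k]

# A query whose embedding is close to the first document
print(retrieve([1.0, 0.0, 0.1]))  # → ['Soak the bananas in coconut milk.']
```

The retrieved chunks are what `add_references_to_prompt=True` injects into the model's context in the assistant script.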
@@ -0,0 +1,29 @@
# RAG Assistant With LanceDB and SQLite

> Fork and clone the repository if needed.

## 1. Setup Ollama models

```shell
ollama pull llama3:8b
ollama pull nomic-embed-text
```

## 2. Create a virtual environment

```shell
python3 -m venv ~/.venvs/aienv
source ~/.venvs/aienv/bin/activate
```

## 3. Install libraries

```shell
pip install -U phidata ollama lancedb pandas sqlalchemy
```

## 4. Run RAG Assistant

```shell
python cookbook/examples/rag_with_lance_and_sqllite/assistant.py
```
@@ -0,0 +1,55 @@
# Import necessary modules from the phi library
from phi.knowledge.pdf import PDFUrlKnowledgeBase
from phi.vectordb.lancedb.lancedb import LanceDb
from phi.embedder.ollama import OllamaEmbedder
from phi.assistant import Assistant
from phi.storage.assistant.sqllite import SqlAssistantStorage
from phi.llm.ollama import Ollama

# Define the path where the vector database will be stored
db_url = "/tmp/lancedb"

# Configure the language model
llm = Ollama(model="llama3:8b", temperature=0.0)

# Create the Ollama embedder
embedder = OllamaEmbedder(model="nomic-embed-text", dimensions=768)

# Create the vector database
vector_db = LanceDb(
    table_name="recipes",  # Table name in the vector database
    uri=db_url,  # Location to initiate/create the vector database
    embedder=embedder,  # Without this, OpenAI embeddings are used by default
)

# Create a knowledge base from a PDF URL, using LanceDb for vector storage
# and OllamaEmbedder for embeddings
knowledge_base = PDFUrlKnowledgeBase(
    urls=["https://phi-public.s3.amazonaws.com/recipes/ThaiRecipes.pdf"],
    vector_db=vector_db,
)

# Load the knowledge base without recreating it if it already exists in LanceDB
knowledge_base.load(recreate=False)
# assistant.knowledge_base.load(recreate=False)  # Alternatively, load the knowledge base after creating the assistant

# Set up SQLite storage for the assistant's data
storage = SqlAssistantStorage(table_name="recipies", db_file="data.db")
storage.create()  # Create the storage table if it doesn't exist

# Initialize the Assistant with the knowledge base and storage
assistant = Assistant(
    run_id="run_id",  # Use any unique identifier for the run
    user_id="user",  # Identifier for the user
    llm=llm,
    knowledge_base=knowledge_base,
    storage=storage,
    tool_calls=True,  # Enable function calls for searching the knowledge base and chat history
    use_tools=True,
    show_tool_calls=True,
    search_knowledge=True,
    add_references_to_prompt=True,  # Use traditional RAG (Retrieval-Augmented Generation)
    debug_mode=True,  # Enable debug mode for additional information
)

# Use the assistant to generate and print a response to a query, formatted in Markdown
assistant.print_response("What is the first step of making Gluai Buat Chi from the knowledge base?", markdown=True)
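The `SqlAssistantStorage` used in the script persists assistant runs in a local SQLite file (`data.db`). The underlying pattern can be sketched with the standard library alone; note that the table name and columns below are illustrative stand-ins, not phidata's actual schema:

```python
import sqlite3

# Illustrative only: phidata's SqlAssistantStorage manages its own schema.
# This sketch just shows the idea of keying persisted runs by run_id in a
# local SQLite database.
conn = sqlite3.connect(":memory:")  # the script uses db_file="data.db"
conn.execute(
    "CREATE TABLE IF NOT EXISTS runs (run_id TEXT PRIMARY KEY, user_id TEXT, memory TEXT)"
)
conn.execute(
    "INSERT INTO runs VALUES (?, ?, ?)",
    ("run_id", "user", '{"chat_history": []}'),
)
row = conn.execute(
    "SELECT user_id FROM runs WHERE run_id = ?", ("run_id",)
).fetchone()
print(row[0])  # → user
```

Keying runs by `run_id` is what lets the assistant resume a prior conversation when the same identifier is passed again.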