This FastAPI web application leverages open-source LLMs (Large Language Models) via Ollama and LlamaIndex to build a RAG (Retrieval-Augmented Generation) framework that uses your notes as the knowledge base. It provides endpoints for querying the knowledge base and receiving responses to user queries.
- FastAPI
- Ollama
- LlamaIndex
- Chat with LLM Endpoint: Use this endpoint to interact with an open-source LLM served by Ollama by providing prompts and receiving responses.
  - Endpoint: http://localhost:8000/chat_with_llm?prompt=your_prompt_here
- Chat with Knowledge Base Endpoint: Use this endpoint to query the knowledge base built with LlamaIndex and retrieve relevant information.
  - Endpoint: http://localhost:8000/chat_with_knowledge_base?prompt=your_prompt_here
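Prompts usually contain spaces and punctuation, so they should be percent-encoded before being placed in the query string. A small sketch using only the standard library (the endpoint path comes from above; the helper name `build_chat_url` is ours):

```python
from urllib.parse import quote

BASE_URL = "http://localhost:8000"  # default uvicorn address

def build_chat_url(endpoint: str, prompt: str) -> str:
    # Percent-encode the prompt so spaces and '?' survive the query string.
    return f"{BASE_URL}/{endpoint}?prompt={quote(prompt, safe='')}"

url = build_chat_url("chat_with_llm", "What is RAG?")
# The resulting URL can be fetched with any HTTP client,
# e.g. urllib.request.urlopen(url), once the server is running.
```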
- Clone the repository:
  git clone https://github.com/antoprince001/notes_scribe.git
  cd notes_scribe
- Install dependencies:
  pip install -r requirements.txt
- Add your notes file to the 'data/notes' directory.
- Run the test suite:
  pytest
- Run the FastAPI server:
  uvicorn main:app --reload
- Open your web browser and navigate to the endpoints listed above to interact with the REST API.