
Super-Rag

Super-performant RAG pipeline for AI apps.


Features · Installation · How to use · Cloud API

✅ Key features

  • Supports multiple document formats and vector databases.
  • Provides a production-ready REST API.
  • Customizable splitting/chunking.
  • Includes options for encoding data using different encoding models, both proprietary and open source.
  • Built-in code interpreter mode for computational question & answer scenarios.
  • Allows session management through unique IDs for caching purposes.

☁️ Cloud API

The easiest way to get started is to use our Cloud API. This API is free to use (within reasonable limits).

📦 Installation

  1. Clone the repository

    git clone https://github.com/superagent-ai/super-rag
    cd super-rag 
  2. Set up a virtual environment

    # Using virtualenv 
    virtualenv env 
    source env/bin/activate 
    
    # Or using venv 
    python3 -m venv env 
    source env/bin/activate 
  3. Install required packages

    poetry install
  4. Rename .env.example to .env and set your environment variables

  5. Run server

    uvicorn main:app --reload
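
Once the server is running, a quick sanity check from Python is shown below. This assumes uvicorn's default host and port (localhost:8000), that FastAPI's auto-generated docs at /docs have not been disabled, and that the requests library is available; adjust as needed for your setup.

# Sanity check against a locally running Super-Rag server.
# Assumes the default uvicorn host/port; adjust if you changed them.
import requests

resp = requests.get("http://localhost:8000/docs")
print(resp.status_code)  # 200 means the API is reachable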

🤖 Interpreter mode

Super-Rag has built-in support for running computational Q&A using code interpreters powered by E2B.dev custom runtimes. You can sign up to receive an API key to leverage their sandboxes in a cloud environment, or set up your own by following these instructions.

🚀 How to use

Super-Rag comes with a built-in REST API powered by FastAPI.

Ingest documents

// POST: /api/v1/ingest

// Payload
{
    "files": [
        {
            "name": "My file", // Optional
            "url": "https://path-to-my-file.pdf"
        }
    ],
    "document_processor": { // Optional
        "encoder": {
            "dimensions": 384,
            "model_name": "embed-multilingual-light-v3.0",
            "provider": "cohere"
        },
        "unstructured": {
            "hi_res_model_name": "detectron2_onnx",
            "partition_strategy": "auto",
            "process_tables": false
        },
        "splitter": {
            "max_tokens": 400,
            "min_tokens": 30,
            "name": "semantic",
            "prefix_summary": true,
            "prefix_title": true,
            "rolling_window_size": 1
        }
    },
    "vector_database": {
        "type": "qdrant",
        "config": {
            "api_key": "YOUR API KEY",
            "host": "THE QDRANT HOST"
        }
    },
    "index_name": "my_index",
    "webhook_url": "https://my-webhook-url"
}
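
For reference, here is a minimal sketch of calling this endpoint from Python with the requests library, assuming the server from the installation steps is running locally on port 8000. The file URL, Qdrant credentials, and index name are placeholders, and the optional fields from the payload above are omitted.

import requests

# Placeholder values; swap in your own file URL, Qdrant credentials and index name.
payload = {
    "files": [{"url": "https://path-to-my-file.pdf"}],
    "vector_database": {
        "type": "qdrant",
        "config": {"api_key": "YOUR API KEY", "host": "THE QDRANT HOST"},
    },
    "index_name": "my_index",
}

resp = requests.post("http://localhost:8000/api/v1/ingest", json=payload)
resp.raise_for_status()
print(resp.json())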

Query documents

// POST: /api/v1/query

// Payload
{
    "input": "What is ReAct",
    "vector_database": {
        "type": "qdrant",
        "config": {
            "api_key": "YOUR API KEY",
            "host": "THE QDRANT HOST"
        }
    },
    "index_name": "YOUR INDEX",
    "encoder": {
        "provider": "cohere",
        "name": "embed-multilingual-light-v3.0",
        "dimensions": 384
    },
    "exclude_fields": ["metadata"], // Exclude specific fields
    "interpreter_mode": false, // Set to true to run computational Q&A with a code interpreter
    "session_id": "my_session_id" // Keeps micro-VM sessions and enables caching
}
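
The same pattern applies when querying, as sketched below with placeholder values and the assumed local server. Set "interpreter_mode" to true and pass a "session_id" if you want computational Q&A with the code interpreter.

import requests

# Placeholder values; the encoder here mirrors the ingest example above.
payload = {
    "input": "What is ReAct",
    "vector_database": {
        "type": "qdrant",
        "config": {"api_key": "YOUR API KEY", "host": "THE QDRANT HOST"},
    },
    "index_name": "YOUR INDEX",
    "encoder": {
        "provider": "cohere",
        "name": "embed-multilingual-light-v3.0",
        "dimensions": 384,
    },
}

resp = requests.post("http://localhost:8000/api/v1/query", json=payload)
print(resp.json())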

Delete document

// POST: /api/v1/delete

// Payload
{
    "file_url": "A file url to delete",
    "vector_database": {
        "type": "qdrant",
        "config": {
            "api_key": "YOUR API KEY",
            "host": "THE QDRANT HOST"
        }
    },
    "index_name": "my_index",
}
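
A corresponding sketch for deletion, again with placeholder values and the assumed local server:

import requests

# Placeholder values; file_url should be the URL of a previously ingested file.
payload = {
    "file_url": "https://path-to-my-file.pdf",
    "vector_database": {
        "type": "qdrant",
        "config": {"api_key": "YOUR API KEY", "host": "THE QDRANT HOST"},
    },
    "index_name": "my_index",
}

resp = requests.post("http://localhost:8000/api/v1/delete", json=payload)
print(resp.json())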

🧠 Supported encoders

  • OpenAI
  • Cohere
  • HuggingFace
  • FastEmbed

🗃 Supported vector databases

  • Weaviate
  • Qdrant
  • Astra
  • Supabase (coming soon)
