
Mentioned in Awesome LLM

IntelliServer

AI models as a private microservice - ChatGPT, Cohere, Llama, Stability, Hugging Face inference, and more.

IntelliServer is a microservice that provides unified access to multiple AI models, allowing you to easily integrate cutting-edge AI into your project.

Run in Postman

Core Services

  • Chatbot: chatbot functionalities using popular models like ChatGPT, Llama, and AWS SageMaker models (see the example request after this list).
  • LLM Evaluation: evaluate different AI models to choose the optimal solution for your requirements.
  • Semantic Search: leverage context-aware semantic search capabilities across text documents.
  • Image Generation: generate high-quality images from described contexts using diffusion image models.
  • Chat Context: return the messages most relevant to the current chatbot conversation.
  • Parsers: convert documents such as PDF and Word files to text.
  • OCR: extract text from images using AWS or Google vision services.
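
Each service is exposed as an HTTP endpoint on the running server. The sketch below illustrates the rough shape of a chatbot request; the route, header, and payload fields shown are assumptions for illustration, so treat the Postman collection and the Swagger docs as the authoritative reference.

# hypothetical request shape - confirm the exact route and fields in the Postman collection
curl -X POST http://localhost/chatbot/chat \
  -H "X-API-KEY: $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"api_key": "<OPENAI_API_KEY>", "model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Hello"}]}'

The provider key in the body can be omitted when the corresponding key (for example OPENAI_API_KEY) is configured on the server, as described in the Testing section.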

Installation

Repository Setup

Instructions to run the microservice from the GitHub repo:

npm

cd intelliserver
npm install
npm start
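
Once the server starts, you can confirm it is reachable by requesting the Swagger docs (see the Testing section). The command below assumes the default port; adjust it if you changed the configuration.

# assumes the server listens on port 80 by default
curl http://localhost/api-docs/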

docker

# docker build and run
docker build -t intelliserver:latest .
docker run -p 80:80 intelliserver:latest

# or docker compose run
docker-compose up

Release (Docker Hub)

To pull the release image from Docker Hub:

docker pull intellinode/intelliserver:latest

Run IntelliServer

# run with custom keys
API_KEY=<YOUR_API_KEY>
ADMIN_KEY=<YOUR_ADMIN_KEY>
docker run -p 80:80 -e API_KEY=$API_KEY -e ADMIN_KEY=$ADMIN_KEY intellinode/intelliserver:latest

# or run with the default key - only for testing
docker run -p 80:80 intellinode/intelliserver:latest

Mac M-series processors

For Mac M-series users, pull the arm64 version:

docker pull intellinode/intelliserver:arm64
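
The arm64 image is started the same way as the standard release image, for example:

docker run -p 80:80 -e API_KEY=$API_KEY -e ADMIN_KEY=$ADMIN_KEY intellinode/intelliserver:arm64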

Testing

To test IntelliServer, you can find the endpoints collection in the Postman repository.

Or access the Swagger interactive docs at localhost/api-docs/.

To customize the default keys and settings, create a .env file inside the cloned intelliserver repo with the following values:

# api keys
API_KEY=<key>
ADMIN_KEY=<key>

# model keys - if not set here, the key should be sent in the API call
OPENAI_API_KEY=<key>
AZURE_OPENAI_API_KEY=<key>
COHERE_API_KEY=<key>
GOOGLE_API_KEY=<key>
STABILITY_API_KEY=<key>
HUGGING_API_KEY=<key>
REPLICATE_API_KEY=<key>
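
When running the Docker image rather than the cloned repo, the same values can be injected from the .env file; a minimal sketch using docker's --env-file option:

# pass the whole .env file into the container
docker run -p 80:80 --env-file .env intellinode/intelliserver:latest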

Key Benefits

  • Unified Access: IntelliServer provides a unified API for accessing different AI models, allowing seamless switching between models using the same endpoint format (see the chatbot example in Core Services).

  • Scalability: IntelliServer uses a microservice architecture, allowing the AI middleware to run as an independent service with dedicated resources.

  • Model Evaluation: IntelliServer's design allows for seamless evaluation and comparison of different AI models through a unified service, facilitating data-driven decisions when selecting the optimal model for specific use cases.

License

IntelliServer is released under the MIT License.