Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
Database system for AI-powered apps
TensorFlow template application for deep learning
DELTA is a deep learning-based natural language and speech processing platform.
RayLLM - LLMs on Ray
A unified end-to-end machine intelligence platform
Python + Inference: a model deployment library in Python, and the simplest model inference server ever.
Lineage metadata API, artifacts streams, sandbox, API, and spaces for Polyaxon
ML pipeline orchestration and model deployments on Kubernetes.
MLModelCI is a complete MLOps platform for managing, converting, profiling, and deploying MLaaS (Machine Learning-as-a-Service), bridging the gap between current ML training and serving systems.
ClearML - Model-Serving Orchestration and Repository Solution
Deploy DL/ML inference pipelines with minimal extra code.
Friendli: the fastest serving engine for generative AI
A production-ready REST API for vLLM
AsyncIO serving for data science models
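As a generic illustration of the asyncio-serving pattern mentioned above (a hypothetical sketch using only the standard library, not that project's actual API), a model server can handle many prediction requests concurrently by awaiting each one as a coroutine; the stub `predict` function here stands in for a real model:

```python
import asyncio
import json

# Hypothetical stub model: doubles each input feature. A real server would
# run actual inference here, typically offloading to an executor or async I/O.
async def predict(features):
    await asyncio.sleep(0)  # yield control, standing in for real async work
    return [2 * x for x in features]

async def handle(request_json):
    # Parse a JSON request, run the model, and serialize the response.
    payload = json.loads(request_json)
    result = await predict(payload["features"])
    return json.dumps({"prediction": result})

async def main():
    # Serve three requests concurrently with asyncio.gather.
    requests = [json.dumps({"features": [i, i + 1]}) for i in range(3)]
    return await asyncio.gather(*(handle(r) for r in requests))

responses = asyncio.run(main())
print(responses)
```

The same coroutine-per-request shape is what dedicated async serving frameworks build on, usually adding batching, timeouts, and an HTTP layer.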
Python library for Modzy Machine Learning Operations (MLOps) Platform
Serve your fastText models for text classification and word vectors