Stars
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
Qdrant - High-performance, massive-scale Vector Database and Vector Search Engine for the next generation of AI. Also available in the cloud https://cloud.qdrant.io/
A runtime for writing reliable asynchronous applications with Rust. Provides I/O, networking, scheduling, timers, ...
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
Modern columnar data format for ML and LLMs implemented in Rust. Convert from parquet in 2 lines of code for 100x faster random access, vector index, and data versioning. Compatible with Pandas, Du…
☄🌌️ The minimal, blazing-fast, and infinitely customizable prompt for any shell!
AITemplate is a Python framework which renders neural network into high performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.
CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image.
Diffusers-Interpret 🤗🧨🕵️‍♀️: Model explainability for 🤗 Diffusers. Get explanations for your generated images.
A modern replacement for Redis and Memcached
Sui, a next-generation smart contract platform with high throughput, low latency, and an asset-oriented programming model powered by the Move programming language
Quick demo of a REST frontend with a Redis session store.
Vim-fork focused on extensibility and usability
A blazing-fast query execution engine that speaks the Apache Spark language and has Arrow DataFusion at its core.
Bring projects, wikis, and teams together with AI. AppFlowy is an AI collaborative workspace where you achieve more without losing control of your data. The best open source alternative to Notion.
Flutter/Dart <-> Rust binding generator, feature-rich, but seamless and simple.
Sample serverless application written in Rust