- Beijing
- https://ryantd.github.io/
Starred repositories (language: C++, sorted by most stars)
- Carbon Language's main repository: documents, design, implementation, and related tools. (NOTE: Carbon Language is experimental; see README.)
- A library for efficient similarity search and clustering of dense vectors.
- Cross-platform, customizable ML solutions for live and streaming media.
- Scalable, portable, and distributed gradient boosting (GBDT, GBRT, or GBM) library for Python, R, Java, Scala, C++, and more. Runs on a single machine, Hadoop, Spark, Dask, Flink, and DataFlow.
- DeepSpeech is an open-source embedded (offline, on-device) speech-to-text engine that can run in real time on devices ranging from a Raspberry Pi 4 to high-powered GPU servers.
- Open-source alternative to Algolia + Pinecone and an easier-to-use alternative to ElasticSearch. ⚡ 🔍 ✨ A fast, typo-tolerant, in-memory fuzzy search engine for building delightful search experiences.
- Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing.
- Development repository for the Triton language and compiler.
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open-source components of TensorRT.
- A general-purpose task-parallel programming system using modern C++.
- Diablo devolved: the magic behind the 1996 computer game.
- High-speed large language model serving on PCs with consumer-grade GPUs.
- TensorRT-LLM provides users with an easy-to-use Python API to define large language models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently.
- A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing, accelerating deep learning training and inference applications.
- Header-only C++/Python library for fast approximate nearest neighbors.
- MindSpore is a new open-source deep learning training/inference framework that can be used for mobile, edge, and cloud scenarios.
- LightSeq: a high-performance library for sequence processing and generation.
- Fast inference engine for Transformer models.
- CV-CUDA™ is an open-source, GPU-accelerated library for cloud-scale image processing and computer vision.
- FlexFlow Serve: low-latency, high-performance LLM serving.
- A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT-2, decoders, etc.) on CPU and GPU.