Stars
An implementation of Depthflow in ComfyUI
Efficient Triton Kernels for LLM Training
An Operator for deployment and maintenance of NVIDIA NIMs and NeMo microservices in a Kubernetes environment.
A collection of YAML files, Helm Charts, Operator code, and guides to act as an example reference implementation for NVIDIA NIM deployment.
Scalable data pre-processing and curation toolkit for LLMs
NVIDIA Data Center GPU Manager (DCGM) is a project for gathering telemetry and measuring the health of NVIDIA GPUs
Power management for KubeVirt virtual machines through IPMI
A tool that helps you and your friends pick a time to meet.
Reference implementations of MLPerf™ inference benchmarks
State-of-the-Art Deep Learning scripts organized by model - easy to train and deploy with reproducible accuracy and performance on enterprise-grade infrastructure.
NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
Generative AI reference workflows optimized for accelerated infrastructure and microservice architecture.
An NVIDIA AI Workbench example project for fine-tuning a Mistral 7B model
A high-throughput and memory-efficient inference and serving engine for LLMs
Run cloud-native workloads on NVIDIA GPUs
Cloud-native distributed storage built on and for Kubernetes
Ansible automation for Authentication Services
Chrome extension to return YouTube dislikes
Security automation content in SCAP, Bash, Ansible, and other formats
Resources, demos, recipes, and more for working with LLMs on OpenShift with OpenShift AI or Open Data Hub.
Robust Speech Recognition via Large-Scale Weak Supervision
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.