Stars
Original Apollo 11 Guidance Computer (AGC) source code for the command and lunar modules.
NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the effective training time by minimizing the downtime due to failures.
A tool to configure, launch and manage your machine learning experiments.
Tool to wrap installations into a container designed for use on HPC systems
A collection of handy Bash One-Liners and terminal tricks for data processing and Linux system maintenance.
WarpFactory is a numerical toolkit for analyzing warp drive spacetimes.
A massively parallel, high-level programming language
Make PyTorch models up to 40% faster! Thunder is a source-to-source compiler for PyTorch. It enables using different hardware executors at once, across one or thousands of GPUs.
Large Action Model framework to develop AI Web Agents
Lazy Predict helps build many basic models with little code and helps identify which models work better without any parameter tuning.
Machine Learning Serving focused on GenAI with simplicity as the top priority.
External Secrets Operator reads information from a third-party service like AWS Secrets Manager and automatically injects the values as Kubernetes Secrets.
A programming language for the cloud ☁️ A unified programming model, combining infrastructure and runtime code into one language ⚡
Platform to experiment with the AI Software Engineer. Terminal based. NOTE: Very different from https://gptengineer.app
Enable your Go applications to self update
A simple, high-throughput file client for mounting an Amazon S3 bucket as a local file system.
Papers from the computer science community to read and discuss.
This is a demo of how you can allow a VM to access the internet "through" an External Load Balancer
⚡️ Automatically add Trace Spans to Go methods and functions
Analyzes resource usage and performance characteristics of running containers.
Reverse proxy that inverts the direction of traffic
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.