Starred repositories
ROSA 🤖 is an AI Agent designed to interact with ROS1- and ROS2-based robotics systems using natural language queries. ROSA helps robot developers inspect, diagnose, understand, and operate robots.
A robust LiDAR Odometry and Mapping (LOAM) package for Livox-LiDAR
FAST-LIVO2: Fast, Direct LiDAR-Inertial-Visual Odometry
Visual Inertial Odometry (VIO) / Simultaneous Localization & Mapping (SLAM) using iSAM2 framework from the GTSAM library.
Denoising Diffusion Probabilistic Models
A collection of pre-trained, state-of-the-art models in the ONNX format
Visualizer for neural network, deep learning and machine learning models
ViPlanner: Visual Semantic Imperative Learning for Local Navigation
Efficient and parallel algorithms for point cloud registration [C++, Python]
Performance benchmarking for NVIDIA-accelerated Isaac ROS packages
NVIDIA-accelerated, deep learned depth segmentation and obstacle field ranging using Bi3D
Deep learned, NVIDIA-accelerated 3D object pose estimation
NVIDIA-accelerated, deep learned model support for image space object detection
NVIDIA-accelerated, deep learned semantic image segmentation
NVIDIA-accelerated, deep learned stereo disparity estimation
NVIDIA-accelerated AprilTag detection and pose estimation
VDA5050-compatible mission controller
VDA5050-compatible cloud service for fleet mission dispatch
Wild Visual Navigation: A system for fast traversability learning via pre-trained models and online self-supervision
NVIDIA-accelerated 3D scene reconstruction and Nav2 local costmap provider using nvblox
NVIDIA Isaac Transport for ROS package for hardware-acceleration friendly movement of messages
The hub for EleutherAI's work on interpretability and learning dynamics
Official implementation for CoVLM: Composing Visual Entities and Relationships in Large Language Models Via Communicative Decoding
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and Generate Anything
[ECCV 2024] Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"