Stars (language: Python, sorted by most stars)
Python sample code for robotics algorithms.
Original reference implementation of "3D Gaussian Splatting for Real-Time Radiance Field Rendering"
Object detection, 3D detection, and pose estimation using center point detection.
BoxMOT: pluggable SOTA tracking modules for segmentation, object detection, and pose estimation models.
OpenMMLab Pose Estimation Toolbox and Benchmark.
Code release for "Masked-attention Mask Transformer for Universal Image Segmentation"
Papers and datasets about point clouds.
detrex is a research platform for DETR-based object detection, segmentation, pose estimation, and other visual recognition tasks.
SplaTAM: Splat, Track & Map 3D Gaussians for Dense RGB-D SLAM (CVPR 2024)
gradslam is an open source differentiable dense SLAM library for PyTorch
[CVPR'24 Highlight & Best Demo Award] Gaussian Splatting SLAM
Gaussian-SLAM: Photo-realistic Dense SLAM with Gaussian Splatting
Official implementation of the paper "LangSplat: 3D Language Gaussian Splatting" [CVPR2024 Highlight]
A python toolkit for parsing captions (in natural language) into scene graphs (as symbolic representations).
An open, modular framework for zero-shot, language conditioned pick-and-drop tasks in arbitrary homes.
GRUtopia: Dream General Robots in a City at Scale
[ICRA2023] Implementation of Visual Language Maps for Robot Navigation
Deep RL for MPC control of Quadruped Robot Locomotion
[CVPR2024] OneFormer3D: One Transformer for Unified Point Cloud Segmentation
The data skeleton from "3D Scene Graph: A Structure for Unified Semantics, 3D Space, and Camera" (https://3dscenegraph.stanford.edu)
Official repo for the ICLR 2023 paper "Dense RGB SLAM with Neural Implicit Maps"
[UNMAINTAINED] Symbolic Framework for Modeling and Identification of Robot Dynamics
[TPAMI 2024] Official repo of "ETPNav: Evolving Topological Planning for Vision-Language Navigation in Continuous Environments"
Code associated with the paper "VLFM: Vision-Language Frontier Maps for Zero-Shot Semantic Navigation" (ICRA 2024)
Repository for automatic classification and labeling of urban point clouds using data fusion and region-growing techniques.
Code of the CVPR 2021 Oral paper: A Recurrent Vision-and-Language BERT for Navigation