Stars
Code for the paper "Multiple Physics Pretraining for Physical Surrogate Models".
PyTorch implementation of SAC-Discrete.
PyTorch implementation of discrete version of Soft Actor-Critic.
[RSS 2024] 3D Diffusion Policy: Generalizable Visuomotor Policy Learning via Simple 3D Representations
Octo is a transformer-based robot policy trained on a diverse mix of 800k robot trajectories.
🌟 This is a PyTorch implementation of mobile architectures (MobileNet and ShuffleNet).
[CVPR 2024] Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data. Foundation Model for Monocular Depth Estimation
Paper Collection for Imitation Learning in RL.
The repository provides code for running inference with the SegmentAnything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
Implementation of HIRO (Data-Efficient Hierarchical Reinforcement Learning)
SplaTAM: Splat, Track & Map 3D Gaussians for Dense RGB-D SLAM (CVPR 2024)
pySLAM contains a Visual Odometry (VO) pipeline in Python for monocular, stereo and RGBD cameras. It supports many modern local features based on Deep Learning.
MD-SLAM: Multi-cue Direct SLAM. Implements the first photometric LiDAR SLAM pipeline that works without any explicit geometrical assumption. Universal approach, working independently for RGB-D an…
KinectFusion implemented in Python with PyTorch
Python sample code for robotics algorithms.
Volumetric-based Contact Point Detection for 7-DoF Grasping
Deep Reinforcement Learning for Robotic Grasping from Octrees
The goal of this project is to implement the paper 'Real-Time Grasp Detection Using Convolutional Neural Networks' by Redmon. This implementation is a work in progress and uses PyTorch.
This repository contains a sample of the grasping dataset and tools to visualize grasps, generate random scenes, and render observations. The two sample files are in the HDF5 format.
This is the repository for the NBMOD dataset and the code for the paper "NBMOD: Find It and Grasp It in Noisy Background."
Use CNNs to estimate a grasping point and angle for a given object, so that the robot arm can pick it up.
Baseline model for "GraspNet-1Billion: A Large-Scale Benchmark for General Object Grasping" (CVPR 2020)
Robotic grasp dataset for multi-object multi-grasp evaluation with RGB-D data. This dataset is annotated using the same protocol as the Cornell Dataset, and can be used as a multi-object extension of Cor…
Detecting robot grasping positions with deep neural networks. The model is trained on Cornell Grasping Dataset. This is an implementation mainly based on the paper 'Real-Time Grasp Detection Using …