Starred repositories
[ISPRS] Hierarchical Loop-based Multiview Registration Framework
Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions (ICCV 2023)
Code release for NeRF (Neural Radiance Fields)
A curated list of awesome neural radiance fields papers
OpenMMLab Detection Toolbox and Benchmark
CVPR2023-Occupancy-Prediction-Challenge
Visual-Inertial-Leg Odometry For Legged Robots
This repository contains implementations and illustrative code to accompany DeepMind publications
A large-scale benchmark and learning environment.
Google Research
Code and models for the paper "One Transformer Fits All Distributions in Multi-Modal Diffusion"
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
Robot kinematics implemented in PyTorch
ROS-Industrial Universal Robots support (https://wiki.ros.org/universal_robot)
Dual UR5 Husky Robot MuJoCo Model
Code Repository for GenDexGrasp: Generalizing Dexterous Grasping across Robotic Hands via Contact Map Matching
Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and Generate Anything
A Cassie walking simulation test example based on cassie-mujoco-sim, adapted with reference to gym-cassie
mc-rtc FSM controller to demonstrate biped walking and object grasping
A MuJoCo/Gym environment for robot control using Reinforcement Learning. The task of agents in this environment is pixel-wise prediction of grasp success chances.
The source code of the paper "Learning High-DOF Reaching-and-Grasping via Dynamic Representation of Gripper-Object Interaction"
A collection of high-quality models for the MuJoCo physics engine, curated by Google DeepMind.