Official repo of "MotionLLM: Multimodal Motion-Language Learning with Large Language Models"
PyTorch implementation of "Unimotion: Unifying 3D Human Motion Synthesis and Understanding"
[IJCV 2024] InterGen: Diffusion-based Multi-human Motion Generation under Complex Interactions
[ECCV 2024] Official Implementation of "CoMusion: Towards Consistent Stochastic Human Motion Prediction via Motion Diffusion".
Official PyTorch implementation of "Synthesizing Coherent Story with Auto-Regressive Latent Diffusion Models"
Code for paper "Learning Semantic Latent Directions for Accurate and Controllable Human Motion Prediction" (ECCV 2024)
Official Code for ACM SIGGRAPH 2024 paper "Ultra Inertial Poser: Scalable Motion Capture and Tracking from Sparse Inertial Sensors and Ultra-Wideband Ranging"
Code for "GVHMR: World-Grounded Human Motion Recovery via Gravity-View Coordinates", Siggraph Asia 2024
The official PyTorch implementation of "BAD: Bidirectional Auto-regressive Diffusion for Text-to-Motion Generation"
We introduce LT3SD, a novel latent 3D scene diffusion approach enabling high-fidelity generation of infinite 3D environments in a patch-by-patch and coarse-to-fine fashion.
[TCSVT 2024] Official PyTorch implementation of the paper "MLP: Motion Label Prior for Temporal Sentence Localization in Untrimmed 3D Human Motions"
HumanML3D: A large and diverse 3d human motion-language dataset.
Interactive Character Control with Auto-Regressive Motion Diffusion Models
Code for the ICASSP 2024 paper "BEAST: Online Joint Beat and Downbeat Tracking Based on Streaming Transformer", an online beat and downbeat tracking system built on a streaming Transformer
MotionFix: Text-Driven 3D Human Motion Editing [SIGGRAPH ASIA 2024]
[ACM MM 2024 oral] The official repo for the paper "HeroMaker: Human-centric Video Editing with Motion Priors".
Official Implementation of SIGGRAPH Asia 2023 (TOG) Paper: Object Motion Guided Human Motion Synthesis
[NCAA] Official implementation of the paper "Motion2Language: Unsupervised Learning of Synchronized Semantic Motion Segmentation"
[ECCV 2024] IDOL: Unified Dual-Modal Latent Diffusion for Human-Centric Joint Video-Depth Generation
Official code for the paper "Monkey See, Monkey Do: Harnessing Self-attention in Motion Diffusion for Zero-shot Motion Transfer"
[CoRL'24] Dynamics-Guided Diffusion Model for Robot Manipulator Design
MADiff: Motion-Aware Mamba Diffusion Models for Hand Trajectory Prediction on Egocentric Videos
Collection of diffusion model papers categorized by subarea