HumanML3D
PyTorch implementation of "Unimotion: Unifying 3D Human Motion Synthesis and Understanding".
(SIGGRAPH 2024) Official repository for "Taming Diffusion Probabilistic Models for Character Control"
The official implementation of the paper "MAS: Multiview Ancestral Sampling for 3D Motion Generation Using 2D Diffusion"
Official code for the paper "Monkey See, Monkey Do: Harnessing Self-attention in Motion Diffusion for Zero-shot Motion Transfer"
Official implementation for "Generating Diverse and Natural 3D Human Motions from Texts (CVPR2022)."
FineDance: A Fine-grained Choreography Dataset for 3D Full Body Dance Generation (ICCV 2023)
Official repo for paper "MotionCraft: Crafting Whole-Body Motion with Plug-and-Play Multimodal Controls"
Code for paper "RapVerse: Coherent Vocals and Whole-Body Motions Generations from Text"
[arXiv 2024] MotionLLM: Understanding Human Behaviors from Human Motions and Videos
Official repo of "Motion-Agent: A Conversational Framework for Human Motion Generation with LLMs"
Official implementation for "Exploring Vision Transformers for 3D Human Motion-Language Models with Motion Patches" (CVPR 2024)
SATO: Stable Text-to-Motion Framework
Implementation of WheelPose, presented at ACM CHI 2024.
[TCSVT 2024] Official PyTorch implementation of the paper "MLP: Motion Label Prior for Temporal Sentence Localization in Untrimmed 3D Human Motions"
[CVPRW 2024] Official Implementation of "in2IN: Leveraging individual Information to Generate Human INteractions".
Official implementation of the paper "Enabling Synergistic Full-Body Control in Prompt-Based Co-Speech Motion Generation" (ACM MM 2024)
Official implementation of CVPR24 highlight paper "Move as You Say, Interact as You Can: Language-guided Human Motion Generation with Scene Affordance"
Official implementation of the NeurIPS 2022 paper "HUMANISE: Language-conditioned Human Motion Generation in 3D Scenes"
ECCV 2024: Controllable Motion Generation through Language Guided Pose Code Editing
Implementation of "Multi-Track Timeline Control for Text-Driven 3D Human Motion Generation" from CVPR Workshop on Human Motion Generation 2024.
Freetalker: Controllable Speech and Text-Driven Gesture Generation Based on Diffusion Models for Enhanced Speaker Naturalness (ICASSP 2024)
Plan, Posture and Go: Towards Open-World Text-to-Motion Generation
DNO: Optimizing Diffusion Noise Can Serve As Universal Motion Priors
(CVPR 2023) PyTorch implementation of "T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations"