FLD with MIT Humanoid

This repository provides the Fourier Latent Dynamics (FLD) algorithm, which represents high-dimensional, long-horizon, highly nonlinear, periodic or quasi-periodic data in a continuously parameterized space. The work demonstrates its representation and generation capability on a robotic motion-tracking task with the MIT Humanoid in NVIDIA Isaac Gym.

[FLD overview figure]

Paper: FLD: Fourier Latent Dynamics for Structured Motion Representation and Learning
Project website: https://sites.google.com/view/iclr2024-fld/home

Maintainer: Chenhao Li
Affiliation: Biomimetic Robotics Lab, Massachusetts Institute of Technology
Contact: [email protected]

Installation

  1. Create a new Python virtual environment with Python 3.8

  2. Install PyTorch 1.10 with CUDA 11.3

     pip3 install torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio==0.10.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
    
  3. Install Isaac Gym

    • Download and install Isaac Gym Preview 4

      cd isaacgym/python
      pip install -e .
      
    • Try running an example

      cd examples
      python 1080_balls_of_solitude.py
      
    • For troubleshooting, check docs in isaacgym/docs/index.html

  4. Install humanoid_gym

     git clone https://github.com/mit-biomimetics/fld.git
     cd fld
     pip install -e .
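
Optionally, run a quick sanity check (a minimal sketch, not part of the repository) to confirm that PyTorch sees the GPU and that Isaac Gym imports cleanly; note that isaacgym must be imported before torch:

  # Minimal installation sanity check; not part of the repository.
  import isaacgym  # noqa: F401  (Isaac Gym must be imported before torch)
  import torch

  assert torch.cuda.is_available(), "CUDA is not visible to PyTorch"
  print(f"torch {torch.__version__} on {torch.cuda.get_device_name(0)}")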
    

Configuration

  • The workflow consists of two main stages: motion representation and motion learning. In the first stage, the motion data is represented in the latent space using FLD. In the second stage, the latent space is used to train a policy for the robot.
  • The provided code exemplifies the training of FLD with human motion data retargeted to the MIT Humanoid. The dataset of 9 different motions is stored under resources/robots/mit_humanoid/datasets/misc. For each motion, 10 trajectories of 240 frames are stored in a separate .pt file named motion_data_<motion_name>.pt. The state dimension indices are specified in reference_state_idx_dict.json under resources/robots/mit_humanoid/datasets/misc (see the inspection sketch after this list).
  • The MIT Humanoid environment is defined by an env file mit_humanoid.py and a config file mit_humanoid_config.py under humanoid_gym/envs/mit_humanoid/. The config file sets both the environment parameters in class MITHumanoidFlatCfg and the training parameters in class MITHumanoidFlatCfgPPO.
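For orientation, the dataset files can be inspected as below. This is a minimal sketch assuming each motion file loads as a tensor of shape (num_trajectories, num_frames, state_dim) and that reference_state_idx_dict.json maps state names to dimension indices; the motion name "jump" is hypothetical:

  # Inspect one motion file and the state index dictionary (sketch).
  import json
  import torch

  root = "resources/robots/mit_humanoid/datasets/misc"
  motion = torch.load(f"{root}/motion_data_jump.pt")  # hypothetical motion name
  print(motion.shape)  # expected: (10, 240, state_dim)

  with open(f"{root}/reference_state_idx_dict.json") as f:
      state_idx_dict = json.load(f)
  print(list(state_idx_dict.keys()))  # which features occupy which indices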

Usage

FLD Training

python scripts/fld/experiment.py
  • history_horizon denotes the window size of the input data. A good practice is to set it such that it contains at least one period of the motion.
  • forecast_horizon denotes the number of future steps to predict while maintaining the quasi-constant latent parameterization. For motions with high aperiodicity, this value should be set small. FLD falls back to PAE when forecast_horizon is set to 1 (see the windowing sketch after this list).
  • The training process is visualized by inspecting the TensorBoard logs at logs/<experiment_name>/fld/misc/. The figures include the FLD loss, the reconstruction of sampled trajectories for each motion, the latent parameters in each latent channel along sampled trajectories for each motion with the formed latent manifold, and the latent parameter distribution.
  • The trained FLD model is saved in logs/<experiment_name>/fld/misc/model_<iteration>.pt, where <experiment_name> is defined in the experiment config.
  • The training process is logged in the same folder. Run tensorboard --logdir logs/<experiment_name>/fld/misc/ --samples_per_plugin images=100 to visualize the training loss and plots.
  • A statistics.pt file is saved in the same folder, containing the mean and standard deviation of the input data and the statistics of the latent parameterization space. This file is used to normalize the input data and to define plotting ranges during policy training.
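The sliding-window view below illustrates how the two horizons act on a trajectory. It is an explanatory sketch with assumed shapes and values, not the repository's data pipeline:

  # How history_horizon and forecast_horizon slice a trajectory (sketch).
  import torch

  history_horizon = 51   # illustrative; should cover at least one motion period
  forecast_horizon = 5   # small for highly aperiodic motions; 1 recovers PAE

  trajectory = torch.randn(240, 44)  # (num_frames, state_dim), dummy data
  windows = trajectory.unfold(0, history_horizon, 1)  # stride-1 sliding windows
  print(windows.shape)  # (240 - history_horizon + 1, 44, history_horizon)

  # The encoder sees the window starting at frame t; the latent dynamics are
  # then propagated to reconstruct the windows starting at
  # t + 1, ..., t + forecast_horizon - 1.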

FLD Evaluation

python scripts/fld/evaluate.py
  • A latent_params.pt file is saved in the same folder, containing the latent parameters of the input data. This file is used to define the input data for policy training with the offline task sampler.
  • A gmm.pt file is saved in the same folder, containing the Gaussian Mixture Model (GMM) of the latent parameters. This file is used to define the input data distribution for policy training with the offline gmm task sampler.
  • A set of latent parameters is sampled and reconstructed to the original motion space. The decoded motion is saved in resources/robots/mit_humanoid/datasets/decoded/motion_data.pt. Figure 1 shows the latent sample and the reconstructed motion trajectory. Figure 2 shows the sampled latent parameters. Figure 3 shows the latent manifold of the sampled trajectory, along with the original ones. Figure 4 shows the GMM of the latent parameters.
  • Note that the motion contains only kinematic and proprioceptive information. For visualization only, the global position and orientation of the robot base are approximated by integrating the velocity information with finite differences. Depending on the finite-difference method and the initial states, the global position and orientation may be inaccurate and drift over time (a minimal integration sketch follows).
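A possible form of this approximation, assuming world-frame base linear velocity and a forward-Euler scheme (the repository may integrate differently):

  # Recover global base position by integrating velocity (sketch).
  import torch

  dt = 1.0 / 50.0                     # assumed frame time; check the dataset rate
  base_lin_vel = torch.randn(240, 3)  # (num_frames, 3), dummy world-frame velocity
  init_pos = torch.zeros(3)

  # Forward Euler: p[t + 1] = p[t] + v[t] * dt; error accumulates as drift.
  base_pos = init_pos + torch.cumsum(base_lin_vel * dt, dim=0)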

Motion Visualization

python scripts/fld/preview.py
  • To visualize the original motions in the training dataset or the sampled and decoded motions in the Isaac Gym environment, set motion_file to the corresponding motion file.
  • Alternatively, the latent parameters can be interactively modified by setting PLAY_LOADED_DATA to False. The modified latent parameters are then decoded to the original motion space and visualized (the two settings are sketched below).
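For orientation, the two settings mentioned above might look as follows; see scripts/fld/preview.py for the authoritative definitions (the decoded-motion path is taken from the evaluation section):

  # Preview settings (sketch; values shown for illustration only).
  PLAY_LOADED_DATA = False  # False enables interactive latent-parameter editing
  motion_file = "resources/robots/mit_humanoid/datasets/decoded/motion_data.pt"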

Policy Training

python scripts/train.py --task mit_humanoid
  • Configure the training parameters in humanoid_gym/envs/mit_humanoid/mit_humanoid_config.py.
  • Choose the task sampler by setting MITHumanoidFlatCfgPPO.runner.task_sampler_class_name to OfflineSampler, GMMSampler, RandomSampler, or ALPGMMSampler (a config sketch follows this list).
  • The trained policy is saved in logs/<experiment_name>/<date_time>_<run_name>/model_<iteration>.pt, where <experiment_name> and <run_name> are defined in the train config.
  • To disable rendering, append --headless.
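The sampler choice might look as follows in the config file. The class and attribute names come from this README; the surrounding class structure is assumed and simplified:

  # Sketch of selecting the task sampler in mit_humanoid_config.py.
  class MITHumanoidFlatCfgPPO:
      class runner:
          # one of: "OfflineSampler", "GMMSampler", "RandomSampler", "ALPGMMSampler"
          task_sampler_class_name = "OfflineSampler"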

Policy Playing

python scripts/play.py --load_run "<date_time>_<run_name>"
  • By default, the loaded policy is the last model of the last run of the experiment folder.
  • Other runs or model iterations can be selected by setting load_run and checkpoint in the train config.
  • The target motions are randomly selected from the dataset from the path specified by datasets_root. These motions are first encoded to the latent space and then sent to the policy for execution.
  • The fallback mechanism is enabled by default with a threshold of 1.0 on dynamics_error (an illustrative sketch follows this list).
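The fallback logic described above can be pictured as below. This is an illustrative sketch, not the repository's implementation; the helper name select_latent is hypothetical:

  # Fall back when the encoded target motion is poorly represented (sketch).
  DYNAMICS_ERROR_THRESHOLD = 1.0  # default threshold mentioned above

  def select_latent(encoded_latent, fallback_latent, dynamics_error: float):
      """Return the fallback target if the latent dynamics error is too high."""
      if dynamics_error > DYNAMICS_ERROR_THRESHOLD:
          return fallback_latent
      return encoded_latent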

Troubleshooting

RuntimeError: nvrtc: error: invalid value for --gpu-architecture (-arch)
  • This error occurs when the installed CUDA version is not compatible with the installed PyTorch version. A quick fix is to comment out the decorator @torch.jit.script in isaacgym/python/isaacgym/torch_utils.py, as sketched below.
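The workaround applied to a function in the style of torch_utils.py (the function shown is a hypothetical stand-in; commenting out the decorator skips TorchScript compilation at the cost of its optimizations):

  import torch

  # @torch.jit.script   # <- decorator commented out as the quick fix
  def normalize(x: torch.Tensor, eps: float = 1e-9) -> torch.Tensor:
      return x / x.norm(p=2, dim=-1, keepdim=True).clamp(min=eps)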

Known Issues

The ALPGMMSampler uses faiss for efficient similarity search and clustering of dense vectors in the latent parameterization space. Installing faiss requires a compatible CUDA version; the current implementation is tested with faiss-cpu, and with faiss-gpu under CUDA 10.2.

Citation

@article{li2024fld,
  title={FLD: Fourier Latent Dynamics for Structured Motion Representation and Learning},
  author={Li, Chenhao and Stanger-Jones, Elijah and Heim, Steve and Kim, Sangbae},
  journal={arXiv preprint arXiv:2402.13820},
  year={2024}
}

References

The code is built upon the open-source Periodic Autoencoder (PAE) implementation, Isaac Gym Environments for Legged Robots, and the PPO implementation. We refer to the original repositories for more details.
