You can install the latest release from PyPI by running
pip install loco-mujoco
or perform an editable installation by cloning this repository and then running:
cd loco-mujoco
pip install -e .
Note
We pinned the MuJoCo version to 2.3.7 during installation because we found slight differences in the simulation across versions, which made testing very difficult. In practice, however, you can use any newer version of MuJoCo! Just install it after installing LocoMuJoCo.
After installing LocoMuJoCo, new commands for downloading the datasets will be set up for you. You can download either all available datasets or only the ones you need. For example, run the following command to download all datasets:
loco-mujoco-download
Run the following command to download only the real (motion capture, without actions) datasets:
loco-mujoco-download-real
Run the following command to download only the perfect (ground truth with actions) datasets:
loco-mujoco-download-perfect
If you also want to run the baselines, you must install our imitation learning library, imitation_lib.
To verify that everything is installed correctly, run one of the examples, such as:
python examples/simple_mushroom_env/example_unitree_a1.py
To replay a dataset, run:
python examples/replay_datasets/replay_Unitree.py
Want a quick overview of all available environments, tasks, and datasets? You can find it :doc:`here <loco_mujoco.environments>`.
And stay tuned! There are many more to come ...
LocoMuJoCo is very easy to use: just create the environment, generate the dataset belonging to that task, and you are ready to go!
import loco_mujoco  # registers the LocoMujoco environments with Gymnasium
import gymnasium as gym

env = gym.make("LocoMujoco", env_name="HumanoidTorque.run")
dataset = env.create_dataset()
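The exact structure of the returned dataset depends on the task and the installed version, so inspect the actual return value of `create_dataset()` in your setup. Purely as an illustration (the keys `"states"` and `"actions"` and the shapes below are assumptions, not the guaranteed API), you can think of it as a dictionary of NumPy arrays:

```python
import numpy as np

# Hypothetical stand-in for a returned dataset -- keys and shapes are
# assumptions made for illustration only, not the guaranteed API.
dataset = {
    "states": np.zeros((1000, 36)),   # 1000 transitions, 36-dim observation
    "actions": np.zeros((1000, 12)),  # 12-dim action (absent in "real" datasets)
}

n_transitions, obs_dim = dataset["states"].shape
print(n_transitions, obs_dim)  # 1000 36
```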
You want to use LocoMuJoCo for pure reinforcement learning? No problem! Just define your custom reward function and pass it to the environment!
import numpy as np
import loco_mujoco
import gymnasium as gym

def my_reward_function(state, action, next_state):
    # custom reward: penalize large mean actions
    return -np.mean(action)

env = gym.make("LocoMujoco", env_name="HumanoidTorque.run", reward_type="custom",
               reward_params=dict(reward_callback=my_reward_function))
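The reward callback receives the current state, the action, and the next state at every step and must return a scalar. As a quick sanity check of the callback above (the input values here are made up for illustration; real states and actions come from the environment):

```python
import numpy as np

# Same callback as in the example above: penalizes large mean actions.
def my_reward_function(state, action, next_state):
    return -np.mean(action)

# Illustrative inputs only -- in the environment these are supplied per step.
action = np.array([0.5, -0.25, 0.5])
reward = my_reward_function(None, action, None)
print(reward)  # -0.25
```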
LocoMuJoCo natively supports MushroomRL:
from loco_mujoco import LocoEnv

env = LocoEnv.make("HumanoidTorque.run")
dataset = env.create_dataset()
You can find many more examples here.
@inproceedings{alhafez2023b,
  title={LocoMuJoCo: A Comprehensive Imitation Learning Benchmark for Locomotion},
  author={Firas Al-Hafez and Guoping Zhao and Jan Peters and Davide Tateo},
  booktitle={6th Robot Learning Workshop, NeurIPS},
  year={2023}
}
Both Unitree models were taken from the MuJoCo Menagerie.