Implementation of GGNS based on PyTorch and PyG. We ground Graph Network Simulators with physical sensor information to resolve uncertainties and improve long-term prediction quality. The data we used was created with the Simulation Open Framework Architecture (SOFA).
Physical simulations that accurately model reality are crucial for many engineering disciplines such as mechanical engineering and robotic motion planning. In recent years, learned Graph Network Simulators have produced accurate mesh-based simulations while requiring only a fraction of the computational cost of traditional simulators. Yet, the resulting predictors are confined to learning from data generated by existing mesh-based simulators and thus cannot include real-world sensory information such as point cloud data. As these predictors have to simulate complex physical systems from only an initial state, they exhibit high error accumulation for long-term predictions. In this work, we integrate sensory information to ground Graph Network Simulators on real-world observations. In particular, we predict the mesh state of deformable objects by utilizing point cloud data. The resulting model allows for accurate predictions over longer time horizons, even under uncertainties in the simulation, such as unknown material properties. Since point clouds are usually not available for every time step, especially in online settings, we employ an imputation-based model that makes use of such additional information only when provided and otherwise resorts to a standard Graph Network Simulator. We experimentally validate our approach on a suite of prediction tasks for mesh-based interactions between soft and rigid bodies. Our method utilizes the additional point cloud information to accurately predict stable simulations where existing Graph Network Simulators fail.
This work builds on:
- Learning Mesh-Based Simulation with Graph Networks
- Learning to Simulate Complex Physics with Graph Networks
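The following minimal sketch illustrates the imputation idea described in the abstract. The interfaces (`model.predict`, the `point_clouds` dictionary) are hypothetical placeholders, not the actual GGNS API:

```python
# Hypothetical sketch of an imputation-based rollout: ground the prediction on
# a point cloud whenever one is available, otherwise take a plain GNS step.
def rollout(model, initial_mesh, num_steps, point_clouds):
    """point_clouds maps time step -> observation; most steps have none."""
    mesh = initial_mesh
    trajectory = [mesh]
    for t in range(num_steps):
        observation = point_clouds.get(t)  # None for unobserved steps
        if observation is not None:
            # grounded step: additionally condition on the sensor observation
            mesh = model.predict(mesh, point_cloud=observation)
        else:
            # standard Graph Network Simulator step from the last prediction
            mesh = model.predict(mesh, point_cloud=None)
        trajectory.append(mesh)
    return trajectory
```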
Build the Python environment using conda and the provided requirements file:
conda env create --file conda-environment.yaml
The Intersection over Union (IoU) metric for 2D data additionally requires the PyMesh, Shapely, and Open3D packages (only needed for evaluation).
PyMesh can be installed by following the instructions under "Download the Source" from here.
On the BwUniCluster 2.0, where you do not have sudo rights, the required steps differ slightly.
Shapely and Open3D can be installed in your conda environment using:
pip install open3d
pip install Shapely
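For reference, a 2D IoU between a predicted and a ground-truth mesh boundary can be computed with Shapely along these lines (an illustrative sketch only; the actual evaluation code in this repository may differ):

```python
from shapely.geometry import Polygon

def iou_2d(pred_boundary, target_boundary):
    """Boundaries are sequences of (x, y) vertex coordinates."""
    pred, target = Polygon(pred_boundary), Polygon(target_boundary)
    union_area = pred.union(target).area
    if union_area == 0.0:
        return 0.0
    return pred.intersection(target).area / union_area
```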
All data was generated using SOFA. The datasets used in the paper can be downloaded from here.
If you want to create your own data using SOFA, you can request the data generation code from the ALR. Put all datasets to use into the ./data/sofa folder and all trained models into the ./models folder.
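The expected layout then looks roughly as follows (the entries below data/sofa and models are placeholders):

```
.
├── data
│   └── sofa
│       └── <downloaded dataset folders>
└── models
    └── <trained model weights>
```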
There are three datasets used in the paper:
- the deformable_plate_dataset, containing the deformation of a family of 2D trapezoids
  - random collider start position and size
  - different material properties
- the tissue_manipulation_dataset, containing a tissue manipulation environment
  - random movement of the gripper on a 2D plane
  - different grasping points on the tissue
  - different material properties
- the cavity_grasping_dataset, containing the deformation of a 3D cavity by a Panda gripper
  - random bottom and top radii of the cone shapes
  - random grasping position along all three axes
  - different material properties of the cavity
More details on the datasets can be found under dataset_details.
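For orientation only: a single time step of such a dataset can be thought of as a mesh graph plus an optional point cloud. A hypothetical PyTorch Geometric representation (field names invented for illustration; see the dataset classes in modules for the actual format) could look like:

```python
import torch
from torch_geometric.data import Data

# Illustrative sample: 100 mesh nodes in 2D with 300 directed mesh edges.
sample = Data(
    pos=torch.rand(100, 2),                      # current mesh node positions
    edge_index=torch.randint(0, 100, (2, 300)),  # mesh connectivity
    y=torch.rand(100, 2),                        # next-step positions (target)
)
sample.point_cloud = torch.rand(500, 2)  # optional sensor observation
print(sample)
```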
To test if training the simulator works, simply run:
python main.py
We use WandB for logging, so make sure you are logged in to WandB in your environment. To train the model on 2D data using the base model, run:
python main.py -c default_plate
To train the model on 3D data using the base model run:
python main.py -c default_tissue
or
python main.py -c default_cavity
You can specify your own run by creating your own config.yaml file.
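The exact schema is defined by the shipped configs (e.g. default_plate), which are the best starting point; purely as a hypothetical illustration, a custom config might contain entries such as:

```yaml
# my_config.yaml -- all keys below are invented for illustration; copy one of
# the shipped configs and adapt it rather than writing one from scratch.
name: my_experiment
dataset: deformable_plate
model:
  latent_dimension: 128
  message_passing_steps: 10
training:
  epochs: 1000
  batch_size: 32
  learning_rate: 1.0e-4
```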
To perform a WandB sweep, specify the sweep as usual and choose one of the train_*_sweep.py files as the program.
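A minimal sweep definition in WandB's standard format could look like this (the metric and parameter names are placeholders; program must point at one of the train_*_sweep.py files):

```yaml
# sweep.yaml
program: train_plate_sweep.py
method: bayes
metric:
  name: validation_loss   # placeholder metric name
  goal: minimize
parameters:
  learning_rate:
    min: 0.00001
    max: 0.001
```

Registering it with wandb sweep sweep.yaml prints the name/project/sweepid used below.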
On a cluster using SLURM, e.g. on the BwUniCluster 2.0, a sweep can be executed as follows, provided your conda environment is activated and your WandB API key is deposited:
sbatch --gres=gpu:1 -t 24:00:00 -p gpu_8 -N 1 --output=path-to-slurm/slurm-%j.out --error=path-to-error/slurm-%j.err --job-name=job-name wandb agent --count 10 name/project/sweepid
You can evaluate a trained model using the evaluate_model.py script. By default, the model is tested on complete rollouts on the test set. The script additionally accepts several input arguments --arg to modify the evaluation, e.g. to save the visualized rollouts. Make sure to pass the path to your model weights, located under models, to the script.
For example, to evaluate our trained GGNS model on the Deformable Plate Dataset run:
python evaluate_model.py -r 'example_models/plate/ggns'
modules contains the Graph Neural Network models, their utility functions, and the custom datasets. The src folder contains the training algorithms and their utility functions. The utils folder contains utilities used throughout the project.
If you use our code, please cite this paper:
@inproceedings{
linkerh{\"a}gner2023grounding,
title={Grounding Graph Network Simulators using Physical Sensor Observations},
author={Jonas Linkerh{\"a}gner and Niklas Freymuth and Paul Maria Scheikl and Franziska Mathis-Ullrich and Gerhard Neumann},
booktitle={International Conference on Learning Representations},
year={2023},
url={https://openreview.net/forum?id=jsZsEd8VEY}
}