PyTorch implementation for "Point Cloud Diffusion Models for Automatic Implant Generation" (MICCAI 2023)


Point Cloud Diffusion Models for Automatic Implant Generation

License: MIT

This is the official PyTorch implementation of the MICCAI 2023 paper Point Cloud Diffusion Models for Automatic Implant Generation by Paul Friedrich, Julia Wolleb, Florentin Bieder, Florian M. Thieringer and Philippe C. Cattin.

If you find our work useful, please consider ⭐ starring this repository and 📝 citing our paper:

@InProceedings{10.1007/978-3-031-43996-4_11,
  author    = "Friedrich, Paul and Wolleb, Julia and Bieder, Florentin and Thieringer, Florian M. and Cattin, Philippe C.",
  title     = "Point Cloud Diffusion Models for Automatic Implant Generation",
  booktitle = "Medical Image Computing and Computer Assisted Intervention -- MICCAI 2023",
  year      = "2023",
  pages     = "112--122",
}

Paper Abstract

Advances in 3D printing of biocompatible materials make patient-specific implants increasingly popular. The design of these implants is, however, still a tedious and largely manual process. Existing approaches to automate implant generation are mainly based on 3D U-Net architectures on downsampled or patch-wise data, which can result in a loss of detail or contextual information. Following the recent success of Diffusion Probabilistic Models, we propose a novel approach for implant generation based on a combination of 3D point cloud diffusion models and voxelization networks. Due to the stochastic sampling process in our diffusion model, we can propose an ensemble of different implants per defect from which the physicians can choose the most suitable one. We evaluate our method on the SkullBreak and SkullFix dataset, generating high-quality implants and achieving competitive evaluation scores.
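The ensembling described above relies on the stochasticity of ancestral sampling: each reverse diffusion run starts from different Gaussian noise and injects fresh noise at every step, so repeated runs yield different plausible implants. The loop below is a minimal NumPy sketch of DDPM-style ancestral sampling, not the authors' implementation; `denoise_fn` and the linear noise schedule are illustrative assumptions.

```python
import numpy as np

def sample_point_cloud(denoise_fn, n_points=4096, steps=50, seed=0):
    """Minimal DDPM ancestral sampling loop (illustrative sketch).

    denoise_fn(x, t) -> predicted noise for point set x at timestep t.
    """
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, steps)      # assumed linear schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    x = rng.standard_normal((n_points, 3))       # start from pure Gaussian noise
    for t in reversed(range(steps)):
        eps = denoise_fn(x, t)
        # posterior mean of the reverse step
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            # fresh noise at every step except the last -> different samples per run
            x = x + np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x
```

Running this n times with different seeds gives the n-sample ensemble from which a physician could pick the most suitable implant.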

Data

We trained our networks on the publicly available parts of the SkullBreak and SkullFix datasets. The data is available at:

The provided code works for the following data structure:

datasets
├───SkullBreak
│   ├───complete_skull
│   ├───defective_skull
│   │   ├───bilateral
│   │   ├───frontoorbital
│   │   ├───parietotemporal
│   │   ├───random_1
│   │   └───random_2
│   └───implant
│       ├───bilateral
│       ├───frontoorbital
│       ├───parietotemporal
│       ├───random_1
│       └───random_2
└───SkullFix
    ├───complete_skull
    ├───defective_skull
    └───implant
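Before training, it can help to verify that the datasets are arranged as shown above. `check_layout` below is a hypothetical helper, not part of the repository:

```python
from pathlib import Path

# defect classes of the SkullBreak dataset, as in the tree above
DEFECT_CLASSES = ["bilateral", "frontoorbital", "parietotemporal",
                  "random_1", "random_2"]

def check_layout(root="datasets"):
    """Return a list of expected directories that are missing under root."""
    root = Path(root)
    expected = ["complete_skull"]
    expected += [f"defective_skull/{c}" for c in DEFECT_CLASSES]
    expected += [f"implant/{c}" for c in DEFECT_CLASSES]
    missing = [str(root / "SkullBreak" / sub) for sub in expected
               if not (root / "SkullBreak" / sub).is_dir()]
    for sub in ["complete_skull", "defective_skull", "implant"]:
        if not (root / "SkullFix" / sub).is_dir():
            missing.append(str(root / "SkullFix" / sub))
    return missing
```

An empty return value means the layout matches; otherwise the listed directories still need to be created or populated.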

Training & Using the Networks

Both networks, the point cloud diffusion model and the voxelization network, are trained independently:

  • Information on training and using the point cloud diffusion model can be found here
  • Information on training and using the voxelization network can be found here

Implementation Details for the Compared Methods

All experiments were performed on an NVIDIA A100 (40GB) GPU.

Results

Results on the SkullBreak dataset

In the following table, we present the achieved evaluation scores as mean values over the SkullBreak test set. We evaluate the Dice Score (DSC), the 10 mm boundary Dice Score (bDSC) and the 95th percentile Hausdorff Distance (HD95). An example implant is shown here.

| Method | DSC | bDSC | HD95 |
| --- | --- | --- | --- |
| 3D U-Net | 0.87 | 0.91 | 2.32 |
| 3D U-Net (sparse) | 0.71 | 0.80 | 4.60 |
| 2D U-Net | 0.87 | 0.89 | 2.13 |
| Ours | 0.86 | 0.88 | 2.51 |
| Ours (n=5) | 0.87 | 0.89 | 2.45 |
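The DSC and HD95 scores reported here can be computed from pairs of binary masks. The NumPy sketch below is not the repository's evaluation code; it uses a brute-force voxel-set distance, so it is only practical for small volumes, and it omits the bDSC, which additionally restricts the comparison to a 10 mm band around the defect boundary.

```python
import numpy as np

def dice(a, b):
    """Dice score between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a, b, spacing=1.0):
    """95th percentile of symmetric voxel-set distances (brute force).

    spacing: isotropic voxel size, so the result is in physical units (e.g. mm).
    """
    pa = np.argwhere(a) * spacing  # foreground voxel coordinates of a
    pb = np.argwhere(b) * spacing  # foreground voxel coordinates of b
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)  # pairwise distances
    # for each voxel of one mask, distance to the nearest voxel of the other mask
    return np.percentile(np.concatenate([d.min(axis=1), d.min(axis=0)]), 95)
```

Higher DSC/bDSC and lower HD95 are better, which is how the tables should be read.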

Results on the SkullFix dataset

In the following table, we present the achieved evaluation scores as mean values over the SkullFix test set. We evaluate the Dice Score (DSC), the 10 mm boundary Dice Score (bDSC) and the 95th percentile Hausdorff Distance (HD95). An example implant is shown here.

| Method | DSC | bDSC | HD95 |
| --- | --- | --- | --- |
| 3D U-Net | 0.91 | 0.95 | 1.79 |
| 3D U-Net (sparse) | 0.81 | 0.87 | 3.04 |
| 2D U-Net | 0.89 | 0.92 | 1.98 |
| Ours | 0.90 | 0.92 | 1.73 |
| Ours (n=5) | 0.90 | 0.93 | 1.69 |

Runtime & GPU memory requirement information

In the following table, we present detailed runtime and GPU memory requirements. All values were measured on a system with an AMD EPYC 7742 CPU and an NVIDIA A100 (40GB) GPU.

| Dataset (Method) | Point Cloud Diffusion Model | Voxelization Network | Total Time |
| --- | --- | --- | --- |
| SkullBreak (without ensembling, n=1) | ~979 s, 4093 MB | ~23 s, 12999 MB | ~1002 s |
| SkullBreak (with ensembling, n=5) | ~1101 s, 12093 MB | ~41 s, 12999 MB | ~1142 s |
| SkullFix (without ensembling, n=1) | ~979 s, 4093 MB | ~92 s, 12999 MB | ~1071 s |
| SkullFix (with ensembling, n=5) | ~1101 s, 12093 MB | ~109 s, 12999 MB | ~1210 s |

Generating implants for the SkullFix dataset takes longer because the volume output by the voxelization network (512 × 512 × 512) must be resampled to the initial volume size (512 × 512 × Z), which varies between implants.
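For binary implant masks, this z-axis resampling can be sketched with nearest-neighbour index mapping, which preserves the label values. This is an illustrative NumPy helper, not the repository's code; the actual interpolation scheme may differ.

```python
import numpy as np

def resample_z(vol, target_z):
    """Nearest-neighbour resampling of the last axis to length target_z."""
    src_z = vol.shape[2]
    # map each target slice centre to the nearest source slice index
    idx = np.minimum((np.arange(target_z) + 0.5) * src_z / target_z,
                     src_z - 1).astype(int)
    return vol[:, :, idx]
```

Because it only reindexes slices, the output contains exactly the same label values as the input, with no interpolation artifacts at the implant boundary.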

Acknowledgements

Our code is based on or inspired by the following repositories:

The code for computing the evaluation scores is based on:
