debugwangz/Sinogram-Inpainting

 
 


2-Step Sparse-View CT Reconstruction with a Domain-Specific Perceptual Network

Dependencies:

Some models, such as FISTA-TV, require a separate environment; see that model's own documentation for details.

Instructions:

  1. Run the commands indicated in each model's folder to train or test the models with the provided checkpoint files and toy datasets (example test images that are excluded from the training set).

    • Train/test SIN first, then train/test 4c-PRN. Intermediate results are saved in the Toy-Dataset folder.
  2. Evaluate the predicted reconstruction images with Compute_Metrics.ipynb.

  3. To download the full dataset, follow the instructions in the Data folder. The Jupyter notebook TCIA_data_preprocessing.ipynb documents our data augmentation procedure.

  4. The State-of-the-art folder contains our implementations of the state-of-the-art baselines compared in the paper: FISTA-TV, cGAN, and Neumann Networks.
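As a rough sketch of what the evaluation step involves, reconstruction quality is typically reported as PSNR against the ground truth. The function below is an illustrative stand-in, not the actual code from Compute_Metrics.ipynb:

```python
import numpy as np

# Illustrative stand-in for the kind of metric computed in
# Compute_Metrics.ipynb; not the notebook's actual code.
def psnr(ground_truth, prediction, data_range=1.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((ground_truth - prediction) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

# Toy example: a constant offset of 0.1 gives MSE = 0.01, i.e. ~20 dB.
gt = np.zeros((64, 64))
pred = np.full((64, 64), 0.1)
print(psnr(gt, pred))
```

Higher PSNR means the prediction is closer to the ground truth; identical images give infinite PSNR.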

Cone-Beam Example Results:

This is a preliminary experiment extending the method to 3D cone-beam reconstruction; the walnut data come from https://github.com/cicwi/WalnutReconstructionCodes.

(Figure: example slices comparing Sparse-View FDK, SIN, SIN-4c-PRN, and Ground Truth reconstructions.)
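For readers unfamiliar with the setting, the sparse-view problem that the sinogram-inpainting stage addresses can be sketched in a few lines of NumPy: a densely sampled sinogram is subsampled along the angle axis, and the network must inpaint the missing projections. The shapes and subsampling factor below are illustrative assumptions, not the repository's configuration:

```python
import numpy as np

# Illustrative sparse-view setup (shapes and factor are assumptions,
# not the repository's actual configuration).
rng = np.random.default_rng(0)
full_sinogram = rng.random((720, 512))  # 720 angles x 512 detector bins

factor = 8                               # keep every 8th projection angle
mask = np.zeros(720, dtype=bool)
mask[::factor] = True                    # 90 measured angles remain

# Zero-filled sinogram: the sparse measurements placed back at their
# angular positions -- the inpainting network's input.
zero_filled = np.zeros_like(full_sinogram)
zero_filled[mask] = full_sinogram[mask]

print(mask.sum(), "measured views out of", mask.size)
```

Reconstructing directly from the sparse views (e.g. with FDK in the cone-beam case) produces streak artifacts; inpainting the sinogram first is what the SIN stage contributes.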

Please cite our paper:

@article{wei20202,
  title={2-step sparse-view ct reconstruction with a domain-specific perceptual network},
  author={Wei, Haoyu and Schiffers, Florian and W{\"u}rfl, Tobias and Shen, Daming and Kim, Daniel and Katsaggelos, Aggelos K and Cossairt, Oliver},
  journal={arXiv preprint arXiv:2012.04743},
  year={2020}
}
