# DDT: Dual-branch Deformable Transformer for Image Denoising (ICME 2023)

Kangliang Liu, Xiangcheng Du, Sijie Liu, Yingbin Zheng, Xingjiao Wu, Cheng Jin
Fudan University, Videt Technology
[paper]
## Installation

- Clone our repository

```
git clone https://github.com/Merenguelkl/DDT.git
cd DDT
```
- Make conda environment

```
conda create -n DDT python=3.8
conda activate DDT
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
pip3 install matplotlib scikit-learn scikit-image opencv-python yacs joblib natsort h5py tqdm
pip3 install einops gdown addict future lmdb numpy pyyaml requests scipy tb-nightly yapf lpips timm fvcore
```
- Install basicsr

```
python setup.py develop --no_cuda_ext
```
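As a quick smoke test (assuming the package is importable as `basicsr`, as in upstream BasicSR), you can check that the install succeeded:

```
# Should print the installed basicsr version if the develop install worked.
python -c "import basicsr; print(basicsr.__version__)"
```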
## Data Preparation

- Real Denoising

Download the SIDD dataset and generate patches from full-resolution training images:

```
python download_data.py --data train --noise real
python download_data.py --noise real --data test --dataset SIDD
python generate_patches_sidd.py
```
- Synthetic Denoising

Download the training datasets (DIV2K, Flickr2K, WED, BSD) and the testing datasets, then generate patches from full-resolution training images:

```
python download_data.py --data train-test --noise gaussian
python generate_patches_dfwb.py
```
## Training

- Real Denoising

```
./train.sh Denoising/Options/RealDenoising_DDT.yml
```
- Synthetic Denoising

```
./train.sh Denoising/Options/GaussianColorDenoising_DDT.yml
```

Note: the training script uses 4 GPUs by default. To use a different number of GPUs, modify DDT/train.sh and DDT/Denoising/Options/*.yml, e.g. as sketched below.
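A minimal sketch of that change, assuming train.sh follows the usual BasicSR/Restormer launch convention (a `--nproc_per_node` flag) and the option files expose a top-level `num_gpu` field; verify both names against your checkout before running:

```
# Sketch only: the flag and field names below are assumptions, not verified
# against this repository. Switch training from 4 GPUs to 2:
sed -i 's/--nproc_per_node=4/--nproc_per_node=2/' train.sh
sed -i 's/^num_gpu: .*/num_gpu: 2/' Denoising/Options/RealDenoising_DDT.yml
sed -i 's/^num_gpu: .*/num_gpu: 2/' Denoising/Options/GaussianColorDenoising_DDT.yml
```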
## Evaluation

Download the pretrained model from Google Drive.
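Since gdown is installed above, one way to fetch the checkpoint from the command line is sketched below; the file ID and output filename are placeholders, not the real release values:

```
# <FILE_ID> stands in for the Google Drive ID of the released checkpoint;
# copy it from the link above. The output filename is also a placeholder.
gdown "https://drive.google.com/uc?id=<FILE_ID>" -O ddt_pretrained.pth
```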
Update the path in eval.py.
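If eval.py keeps that path in a plain string variable (a guess; the variable name below is hypothetical), a one-liner such as the following could point it at the downloaded weights:

```
# Hypothetical: 'weights' is a placeholder variable name; open eval.py to
# find the real one before running this.
sed -i "s|weights = .*|weights = 'ddt_pretrained.pth'|" eval.py
```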
```
python eval.py
```
The visualized outputs will be generated in DDT/visualization.
## Citation

If you are interested in this work, please consider citing:
```
@inproceedings{liu2023ddt,
  title={DDT: Dual-branch deformable transformer for image denoising},
  author={Liu, Kangliang and Du, Xiangcheng and Liu, Sijie and Zheng, Yingbin and Wu, Xingjiao and Jin, Cheng},
  booktitle={2023 IEEE International Conference on Multimedia and Expo (ICME)},
  pages={2765--2770},
  year={2023},
  organization={IEEE}
}
```
This code is based on BasicSR and Restormer. Thanks for their awesome work.
If you have any questions or suggestions, please contact [email protected].