Feedback Network for Image Super-Resolution [arXiv] [CVF] [Poster]
This repository contains the PyTorch code for our proposed SRFBN.
The code was developed by Paper99 and penguin1214 based on BasicSR, and tested on Ubuntu 16.04/18.04 (Python 3.6/3.7, PyTorch 0.4.0/1.0.0/1.0.1, CUDA 8.0/9.0/10.0) with 2080Ti/1080Ti GPUs.
The architecture of our proposed SRFBN. Blue arrows represent feedback connections. Details of SRFBN can be found in our main paper.
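Conceptually, the network unrolls a small number of feedback iterations: at each iteration the feedback block receives the shallow features of the LR input together with its own output from the previous iteration, and an SR image is predicted at every iteration. The snippet below is only a minimal, hypothetical sketch of this idea in PyTorch; the module names, layer choices, and hyper-parameters are illustrative assumptions, not the released SRFBN implementation (see the paper and the code in this repository for the real architecture).

```python
import torch
import torch.nn as nn

class NaiveFeedbackSR(nn.Module):
    """Minimal, illustrative feedback loop for SR (NOT the released SRFBN code).

    At each of T iterations the block sees the shallow features of the LR input
    concatenated with its own hidden state from the previous iteration (the
    feedback connection), and an SR image is predicted at every iteration.
    """

    def __init__(self, num_feats=32, num_steps=4, scale=4):
        super().__init__()
        self.num_steps = num_steps
        self.extract = nn.Conv2d(3, num_feats, 3, padding=1)      # shallow feature extraction
        self.feedback = nn.Sequential(                            # stand-in for the feedback block
            nn.Conv2d(2 * num_feats, num_feats, 3, padding=1),
            nn.PReLU(),
            nn.Conv2d(num_feats, num_feats, 3, padding=1),
        )
        self.reconstruct = nn.Sequential(                         # upsample + reconstruct an SR image
            nn.Upsample(scale_factor=scale, mode='bilinear', align_corners=False),
            nn.Conv2d(num_feats, 3, 3, padding=1),
        )

    def forward(self, lr):
        shallow = self.extract(lr)
        hidden = torch.zeros_like(shallow)                        # initial state: no feedback yet
        outputs = []
        for _ in range(self.num_steps):
            hidden = self.feedback(torch.cat([shallow, hidden], dim=1))  # feedback connection
            outputs.append(self.reconstruct(hidden))                     # one SR estimate per step
        return outputs                                            # every step's output can be supervised

if __name__ == "__main__":
    sr_list = NaiveFeedbackSR()(torch.randn(1, 3, 24, 24))
    print(len(sr_list), sr_list[-1].shape)                        # 4 outputs, each (1, 3, 96, 96)
```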
If you find our work useful in your research or publications, please consider citing:
```
@inproceedings{li2019srfbn,
    author = {Li, Zhen and Yang, Jinglei and Liu, Zheng and Yang, Xiaomin and Jeon, Gwanggil and Wu, Wei},
    title = {Feedback Network for Image Super-Resolution},
    booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    year = {2019}
}
@inproceedings{wang2018esrgan,
    author = {Wang, Xintao and Yu, Ke and Wu, Shixiang and Gu, Jinjin and Liu, Yihao and Dong, Chao and Qiao, Yu and Loy, Chen Change},
    title = {ESRGAN: Enhanced super-resolution generative adversarial networks},
    booktitle = {The European Conference on Computer Vision Workshops (ECCVW)},
    year = {2018}
}
```
Requirements:
- Python 3 (Anaconda is recommended)
- skimage
- imageio
- PyTorch (version >= 0.4.1 is recommended)
- tqdm
- pandas
- cv2 (`pip install opencv-python`)
- Matlab
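Optionally, you can verify the Python-side dependencies before running anything; Matlab is only needed for the provided `.m` data-preparation and evaluation scripts. A minimal, repo-agnostic check:

```python
# Optional sanity check for the Python-side dependencies listed above.
import torch, skimage, imageio, tqdm, pandas, cv2  # noqa: F401 (import check only)

print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("scikit-image:", skimage.__version__, "| OpenCV:", cv2.__version__)
```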
- Clone this repository:

  ```
  git clone https://github.com/Paper99/SRFBN_CVPR19.git
  ```
- Download our pre-trained models from GoogleDrive or BaiduYun (code: 6qta), unzip them, and place them in `./models`.
- Then, `cd` to `SRFBN_CVPR19` and run one of the following commands for evaluation on Set5:

  ```
  python test.py -opt options/test/test_SRFBN_x2_BI.json
  python test.py -opt options/test/test_SRFBN_x3_BI.json
  python test.py -opt options/test/test_SRFBN_x4_BI.json
  python test.py -opt options/test/test_SRFBN_x3_BD.json
  python test.py -opt options/test/test_SRFBN_x3_DN.json
  ```
- Finally, PSNR/SSIM values for Set5 are printed to your screen, and the reconstructed images can be found in `./results`.
- If you have cloned this repository and downloaded our pre-trained models, you can first download the SR benchmark datasets (Set5, Set14, B100, Urban100, and Manga109) from GoogleDrive or BaiduYun (code: z6nz).
- Run `./results/Prepare_TestData_HR_LR.m` in Matlab to generate HR/LR images with different degradation models.
- Edit `./options/test/test_SRFBN_example.json` for your needs according to `./options/test/README.md`.
- Then, run the following commands:

  ```
  cd SRFBN_CVPR19
  python test.py -opt options/test/test_SRFBN_example.json
  ```

- Finally, PSNR/SSIM values are printed to your screen, and the reconstructed images can be found in `./results`. You can further evaluate the SR results using `./results/Evaluate_PSNR_SSIM.m` (a rough Python alternative is sketched after this list).
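If Matlab is not available, approximate PSNR/SSIM values can also be computed in Python with scikit-image (already listed in the requirements). The sketch below is only a rough stand-in for `./results/Evaluate_PSNR_SSIM.m`: the file paths are placeholders, and the numbers will not exactly match the Matlab script, which, like most SR evaluations, typically measures on the luminance channel with a cropped border.

```python
# Rough Python alternative to ./results/Evaluate_PSNR_SSIM.m (sanity check only).
# Paths are placeholders; numbers will differ from the Matlab script.
import imageio
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

hr = imageio.imread("path/to/HR/butterfly.png")   # ground-truth HR image (placeholder path)
sr = imageio.imread("path/to/SR/butterfly.png")   # reconstructed SR image (placeholder path)

psnr = peak_signal_noise_ratio(hr, sr, data_range=255)
# channel_axis requires scikit-image >= 0.19; use multichannel=True on older versions.
ssim = structural_similarity(hr, sr, channel_axis=-1, data_range=255)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```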
- If you have cloned this repository and downloaded our pre-trained models, you can first place your own images in `./results/LR/MyImage`.
- Edit `./options/test/test_SRFBN_example.json` for your needs according to `./options/test/README.md`.
- Then, run the following commands:

  ```
  cd SRFBN_CVPR19
  python test.py -opt options/test/test_SRFBN_example.json
  ```

- Finally, you can find the reconstructed images in `./results`.
- Download the training set DIV2K [Official Link] or DF2K [GoogleDrive] [BaiduYun] (provided by BasicSR).
- Run `./scripts/Prepare_TrainData_HR_LR.m` in Matlab to generate HR/LR training pairs with the corresponding degradation model and scale factor. (Note: please place the generated training data on an SSD (solid-state drive) for fast training. A rough Python sketch of bicubic LR generation appears after this list.)
- Run `./results/Prepare_TestData_HR_LR.m` in Matlab to generate HR/LR test images with the corresponding degradation model and scale factor, and choose one of the SR benchmarks for evaluation during training.
- Edit `./options/train/train_SRFBN_example.json` for your needs according to `./options/train/README.md`.
- Then, run the following commands:

  ```
  cd SRFBN_CVPR19
  python train.py -opt options/train/train_SRFBN_example.json
  ```

- You can monitor the training process in `./experiments`.
- Finally, you can follow the test pipeline to evaluate your model.
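In the BI setting, the LR images are obtained by bicubic downsampling of the HR images. If Matlab is unavailable, a rough Python/OpenCV sketch of this step is given below; the directory paths are placeholders, and OpenCV's bicubic kernel differs slightly from MATLAB's `imresize`, so the resulting pairs will not be identical to those produced by `./scripts/Prepare_TrainData_HR_LR.m` (use the Matlab script for results comparable to the paper).

```python
# Rough sketch of BI (bicubic) LR generation with OpenCV; paths are placeholders.
import glob
import os
import cv2

scale = 4
hr_dir, lr_dir = "path/to/DIV2K/HR", "path/to/DIV2K/LR_x4"    # placeholder paths
os.makedirs(lr_dir, exist_ok=True)

for hr_path in glob.glob(os.path.join(hr_dir, "*.png")):
    hr = cv2.imread(hr_path)
    h, w = hr.shape[:2]
    hr = hr[: h - h % scale, : w - w % scale]                  # crop so H, W divide by the scale
    lr = cv2.resize(hr, (hr.shape[1] // scale, hr.shape[0] // scale),
                    interpolation=cv2.INTER_CUBIC)             # bicubic downsampling (BI model)
    cv2.imwrite(os.path.join(lr_dir, os.path.basename(hr_path)), lr)
```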
Average PSNR/SSIM for scale factors x2, x3, and x4 with the BI degradation model. The best performance is shown in red and the second-best in blue.
Average PSNR/SSIM values for scale factor x3 with the BD and DN degradation models. The best performance is shown in red and the second-best in blue.
Qualitative results with the BI degradation model (x4) on “img 004” from Urban100.
Qualitative results with the BD degradation model (x3) on “MisutenaideDaisy” from Manga109.
Qualitative results with the DN degradation model (x3) on “head” from Set14.
To do:
- Curriculum learning for complex degradation models (i.e., BD and DN degradation models).
- SRFBN-S pretrained models.