Recently, CycleGAN-based methods have been widely applied to unsupervised image dehazing and have achieved significant results. However, most existing CycleGAN-based methods ignore that the input of the generator contains two different distributions of data, which often leads to confusion in the learning process of the generator and consequently limits the final dehazing performance. Moreover, reconstructing clear images through model architecture design and loss functions is an indirect constraint, making it difficult to compensate for the high-frequency information, such as textures and structures, missing from the features extracted from hazy images. To address these issues, in this paper we propose an Unsupervised Multi-Branch network with High-Frequency Enhancement (UME-Net), which contains a Multi-Branch Dehazing Network (MBDN) and a High-Frequency Components Enhancement Module (HFEM). Specifically, MBDN constructs a single unsupervised dehazing network with a Shared Encoding Module (SEM) and a Multi-Branch Decoding Module (MDM). SEM enhances the consistency of feature representation, and MDM effectively addresses the confusion during the generator learning process in CycleGAN-based methods. Furthermore, based on a key observation that hazy images and their corresponding clear images exhibit only subtle differences in high-frequency information, the HFEM is designed to compensate for the high-frequency information missing in the network, which further enhances the ability of UME-Net to restore edge and texture information obscured by dense haze. Experimental results on challenging benchmark datasets demonstrate the superiority of our UME-Net over SOTA unsupervised image dehazing methods.
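The key observation behind HFEM can be illustrated with a toy sketch (this is only an illustration, not the paper's HFEM implementation): under a simple haze model with spatially uniform transmission `t` and airlight `A`, `hazy = t * clear + (1 - t) * A`, so the haze mostly alters the low-frequency component, while the high-frequency residual of the hazy image is just a scaled copy of that of the clear image. The box-blur kernel size and the synthetic haze parameters below are arbitrary choices for the demo.

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k box blur with edge padding (a stand-in low-pass filter)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def split_frequencies(img, k=3):
    """Split an image into a low-frequency part (blur) and a
    high-frequency residual carrying edges and textures."""
    low = box_blur(img, k)
    return low, img - low

rng = np.random.default_rng(0)
clear = rng.random((64, 64))
hazy = 0.6 * clear + 0.4          # synthetic haze: t = 0.6, airlight A = 1.0

_, hf_clear = split_frequencies(clear)
_, hf_hazy = split_frequencies(hazy)

# Blurring is linear and preserves constants, so the high-frequency
# residuals differ only by the transmission factor t = 0.6.
print(np.allclose(hf_hazy, 0.6 * hf_clear))  # True
```

Because the two high-frequency components are so closely related, a module that re-injects high-frequency detail is a natural way to recover textures and structures lost under dense haze.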
Unsupervised Multi-Branch Network with High-Frequency Enhancement for Image Dehazing
Explore the documentation for UME-Net »

Check Demo · Report Bug · Pull Request
- Dependencies
- Filetree
- Pretrained Model
- Train
- Test
- Clone the repo
- Qualitative Results
- Results on Statehaze1k-Thin remote sensing Dehazing Challenge testing images:
- Results on Statehaze1k-Moderate remote sensing Dehazing Challenge testing images:
- Results on Statehaze1k-Thick remote sensing Dehazing Challenge testing images:
- Results on NTIRE 2021 NonHomogeneous Dehazing Challenge testing images:
- Results on RESIDE-Outdoor Dehazing Challenge testing images:
- Copyright
- Thanks
- Pytorch 1.8.0
- Python 3.7.1
- CUDA 11.7
- Ubuntu 18.04
├── README.md
├── /UME-Net/
│   ├── train.py
│   ├── test.py
│   ├── Model.py
│   ├── Model_util.py
│   ├── perceptual.py
│   ├── train_dataset.py
│   ├── test_dataset.py
│   ├── utils_test.py
│   ├── make.py
│   ├── Parameter_test.py
│   ├── Loss.py
│   ├── __init__.py
│ ├── /datasets_train/
│ │ ├── /hazy/
│ │ ├── /clean/
│ ├── /datasets_test/
│ │ ├── /hazy/
│ │ ├── /clean/
│ ├── /output_result/
├── LICENSE.txt
└── /images/
Download our model weights on Baidu cloud disk: https://pan.baidu.com/s/1fHNpDVOWV2KqouEIkYS2OA?pwd=lzms
Download our test datasets on Baidu cloud disk: https://pan.baidu.com/s/1HK1oy4SjZ99N-Dh-8_s0hA?pwd=lzms
python train.py --train_batch_size 4 --gpus 0 --type 5
python test.py --gpus 0 --type 5
git clone https://github.com/thislzm/UME-Net.git
This project is licensed under the MIT License. Please refer to LICENSE.txt for details.