
Stimulating Diffusion Model for Image Denoising via Adaptive Embedding and Ensembling


Abstract: Image denoising is a fundamental problem in computational photography, where achieving high perception with low distortion is highly demanding. Current methods either struggle with perceptual quality or suffer from significant distortion. Recently, the emerging diffusion model has achieved state-of-the-art performance in various tasks and demonstrates great potential for image denoising. However, stimulating diffusion models for image denoising is not straightforward and requires solving several critical problems. For one thing, the input inconsistency hinders the connection between diffusion models and image denoising. For another, the content inconsistency between the generated image and the desired denoised image introduces distortion. To tackle these problems, we present a novel strategy called the Diffusion Model for Image Denoising (DMID) by understanding and rethinking the diffusion model from a denoising perspective. Our DMID strategy includes an adaptive embedding method that embeds the noisy image into a pre-trained unconditional diffusion model and an adaptive ensembling method that reduces distortion in the denoised image. Our DMID strategy achieves state-of-the-art performance on both distortion-based and perception-based metrics, for both Gaussian and real-world image denoising.
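To make the embedding idea in the abstract concrete: a noisy image y = x0 + σ·ε can be placed on the diffusion trajectory by picking the timestep t whose cumulative schedule ᾱ_t satisfies ᾱ_t = 1 / (1 + σ²), since then √ᾱ_t·y matches the DDPM forward sample x_t = √ᾱ_t·x0 + √(1−ᾱ_t)·ε. The sketch below is ours, not the repository's code, and assumes a standard linear DDPM beta schedule; function names are illustrative.

```python
import numpy as np

# Assumed linear DDPM beta schedule; the real schedule depends on the checkpoint.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_cumprod = np.cumprod(1.0 - betas)

def matching_timestep(sigma: float) -> int:
    """Pick t so that scaling y = x0 + sigma*eps by sqrt(abar_t) matches
    the forward process x_t = sqrt(abar_t)*x0 + sqrt(1-abar_t)*eps.
    The match requires abar_t = 1 / (1 + sigma**2)."""
    target = 1.0 / (1.0 + sigma ** 2)
    return int(np.argmin(np.abs(alphas_cumprod - target)))

def embed(noisy_image: np.ndarray, sigma: float):
    """Scale the noisy image onto the diffusion trajectory at step t;
    reverse diffusion then starts from (x_t, t) instead of pure noise."""
    t = matching_timestep(sigma)
    x_t = np.sqrt(alphas_cumprod[t]) * noisy_image
    return x_t, t
```

Noisier inputs map to larger t (more reverse steps), which is what makes the embedding "adaptive" to the noise level.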


🚀 News

  • 2023.5 The first version of the manuscript and the code were finished.

  • 2024.5.10 Our paper has been accepted by TPAMI! 🎉

  • 2024.6.4 The code and the tools are all supplemented and released! 🎊

⏳ Todo lists

  • We will supplement the code for noise transformation within a month (before 6.12).
  • We may release our other methods ...

Pipeline of DMID

Quick Start

  • Download the pre-trained unconditional diffusion model (from OpenAI) and place it in ./pre-trained/.

  • To get started quickly, just run:

python main_for_gaussian.py

Evaluation

  • All the visual results are available.
  • Download the test sets (CBSD68, Kodak24, McMaster, Urban100, ImageNet) and place them in './data/', e.g. './data/CBSD68'.
  • Download the test sets after noise transformation (CC, PolyU, FMDD), and replace the folder named '.pre-trained' with the downloaded test sets.
  • Download the pre-trained unconditional diffusion model (from OpenAI) and place it in ./pre-trained/.
  • To quickly reproduce the reported results, run
sh evaluate.sh

Tools

python utils_cal_N.py
python utils_cal_N_2.py
  • 🔨 To perform our improved noise transformation method yourself, or to denoise any given noisy image, first perform noise transformation and then denoise the intermediate image:
python main_for_real_NT.py
python main_for_real.py
  • 🔨 We provide new code for real-world image denoising (main_for_real.py), because the original code (main_for_real_o.py) contains some errors that we have not yet located.
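The adaptive ensembling step described in the abstract can be sketched as averaging several stochastic reverse-diffusion outputs, which lowers the variance (and hence the distortion) of the final estimate. The `sampler` argument below is a placeholder for the repository's actual reverse-diffusion call; this is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def ensemble_denoise(noisy_image, sampler, num_samples=5, seed=0):
    """Average multiple stochastic denoised samples.

    `sampler(image, rng)` stands in for one run of reverse diffusion;
    independent runs differ in their sampled noise, so their mean has
    lower variance than any single run.
    """
    rng = np.random.default_rng(seed)
    samples = [sampler(noisy_image, rng) for _ in range(num_samples)]
    return np.mean(samples, axis=0)

# Toy usage: a "sampler" that returns the clean signal plus fresh noise,
# so averaging visibly shrinks the residual error.
clean = np.ones((4, 4))
toy_sampler = lambda img, rng: clean + rng.normal(0.0, 0.1, clean.shape)
result = ensemble_denoise(None, toy_sampler, num_samples=50)
```

With 50 samples of noise std 0.1, the averaged residual drops to roughly 0.1/√50 per pixel; the paper's adaptive variant would additionally weight or select samples rather than average uniformly.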

Results

Classical Gaussian denoising
Robust Gaussian denoising
Real-world image denoising
Compared with other diffusion-based methods
