Sagiri: Low Dynamic Range Image Enhancement with Generative Diffusion Prior

Paper | Project Page

Baiang Li1, 5, Sizhuo Ma3, Yanhong Zeng1, Xiaogang Xu2, 4, Youqing Fang1, Zhao Zhang5, Jian Wang3✝, Kai Chen1✝
✝Corresponding authors.
1Shanghai AI Laboratory
2The Chinese University of Hong Kong
3Snap Inc.
4Zhejiang University
5Hefei University of Technology

Our task:

📖 Table of Contents

  • Visual results on entire real-world images
  • Visual results on selected regions
  • Sagiri plugged after other methods
  • Controlling where and what to generate
  • Update
  • Installation
  • Pretrained Models
  • Inference
  • Training
  • Citation
  • License
  • Acknowledgement
  • Contact

Visual results on entire real-world images

Left: Input image; Middle: Previous methods; Right: LS-Sagiri (Ours)

Visual results on selected regions

Left: Input image; Middle: Previous methods; Right: LS-Sagiri (Ours)

Sagiri plugged after other methods

Left: Input image; Middle: SingleHDR; Right: SingleHDR+Sagiri (Ours)
Left: Input image; Middle: LCDPNet; Right: LCDPNet+Sagiri (Ours)

Controlling where and what to generate

First: Input image; Second: SingleHDR; Third: SingleHDR+Sagiri (with prompt a); Fourth: SingleHDR+Sagiri (with prompt b).
Prompt a: "A white waterfall is flowing down from the cliff, surrounded by rocks and trees."
Prompt b: "The sun is setting, and the sky is filled with clouds."

Update

  • 2024.06: This repo is released.

Installation

```shell
# clone this repo
git clone https://github.com/openmmlab/Sagiri.git
cd Sagiri

# create an environment with Python >= 3.9
conda create -n sagiri python=3.9
conda activate sagiri
pip install -r requirements.txt
```
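
A quick sanity check of the new environment can catch missing dependencies before running inference. The snippet below is an optional sketch, assuming requirements.txt installs PyTorch (which the diffusion models rely on); it simply prints the interpreter version, the installed torch version, and whether a CUDA device is visible.

```shell
# Optional sanity check (assumes requirements.txt installs PyTorch).
python --version
python -c "import torch; print('torch', torch.__version__, '| CUDA available:', torch.cuda.is_available())"
```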

Pretrained Models

| Model Name | Description | BaiduNetdisk |
| :--------- | :---------- | :----------- |
| stage1.ckpt | Stage 1 model for brightness and color adjustment | download |
| stage2.ckpt | Sagiri model for conditional image generation | download |
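
The inference scripts take explicit --ckpt paths, so the checkpoints can live anywhere. The layout below is only an illustrative sketch; the weights/ folder name and the download location are placeholders, not anything required by the code.

```shell
# Illustrative layout only; any location works as long as --ckpt points to it.
mkdir -p weights
mv /path/to/downloaded/stage1.ckpt weights/stage1.ckpt
mv /path/to/downloaded/stage2.ckpt weights/stage2.ckpt
ls -lh weights/   # should list both checkpoints
```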

Inference

Stage 1 inference

Note that other restoration models can also be used to perform stage 1's processing.

```shell
python scripts/inference_stage1.py \
--config configs/model/swinir.yaml \
--ckpt /path/to/stage1/model \
--input /path/to/input/images \
--output /path/to/output/images
```

Sagiri inference

```shell
python infer_Sagiri.py \
--config configs/model/cldm.yaml \
--ckpt /path/to/stage2/model \
--steps 30 \
--input /path/to/input/images \
--output /path/to/output/images \
--disable_preprocess_model \
--device cuda
```
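
Because --disable_preprocess_model skips Sagiri's built-in preprocessing, the output of stage 1 (or of any other restoration method, as in the examples above) can be fed straight into Sagiri. The sketch below chains the two commands using only the flags documented in this README; all paths are placeholders.

```shell
# Sketch: run stage 1 first, then refine its outputs with Sagiri.
# All paths are placeholders; only flags shown in this README are used.
STAGE1_OUT=./results/stage1
FINAL_OUT=./results/sagiri

python scripts/inference_stage1.py \
  --config configs/model/swinir.yaml \
  --ckpt /path/to/stage1/model \
  --input /path/to/input/images \
  --output "$STAGE1_OUT"

python infer_Sagiri.py \
  --config configs/model/cldm.yaml \
  --ckpt /path/to/stage2/model \
  --steps 30 \
  --input "$STAGE1_OUT" \
  --output "$FINAL_OUT" \
  --disable_preprocess_model \
  --device cuda
```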

LS-Sagiri inference

```shell
python infer_LSSagiri.py \
--config configs/model/cldm.yaml \
--ckpt /path/to/stage2/model \
--steps 30 \
--input /path/to/input/images \
--output /path/to/output/images \
--device cuda
```

Training

```shell
python train.py --config [training_config_path]
```
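
For long runs it can help to launch training in the background and keep a log. The command below is just a sketch: the config path is a placeholder (see the configs/ directory for the actual training config), and train.py is invoked only with the --config flag shown above.

```shell
# Sketch: background launch with a log file.
# The config path is a placeholder -- substitute the real training config from configs/.
nohup python train.py --config /path/to/training_config.yaml > train.log 2>&1 &
tail -f train.log
```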

Citation

Please cite us if our work is useful for your research.

@article{li2024sagiri,
  author    = {Baiang Li and Sizhuo Ma and Yanhong Zeng and Xiaogang Xu and Youqing Fang and Zhao Zhang and Jian Wang and Kai Chen},
  title     = {Sagiri: Low Dynamic Range Image Enhancement with Generative Diffusion Prior},
  journal   = {arXiv preprint},
  year      = {2024},
}

License

This project is released under the Apache 2.0 license.

Acknowledgement

This project is based on ControlNet, BasicSR and DiffBIR. Thanks for their awesome work.

Contact

Should you have any questions, please feel free to contact me at [email protected].
