
GLADNet

This is a TensorFlow implementation of GLADNet.

GLADNet: Low-Light Enhancement Network with Global Awareness. In FG'18 Workshop FOR-LQ 2018
Wenjing Wang*, Chen Wei*, Wenhan Yang, Jiaying Liu. (* indicates equal contributions)

Paper, Project Page

Teaser Image

Requirements

  1. Python
  2. TensorFlow >= 1.3.0
  3. numpy, PIL
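
A minimal sketch for checking that the environment satisfies the list above (this script is our illustration and is not part of the repository; the version threshold simply mirrors the requirement stated here):

    # check_env.py -- quick sanity check for the dependencies listed above
    import numpy as np
    import tensorflow as tf
    from PIL import Image

    # The README asks for TensorFlow >= 1.3.0
    major, minor = (int(v) for v in tf.__version__.split('.')[:2])
    assert (major, minor) >= (1, 3), 'TensorFlow >= 1.3.0 required, found %s' % tf.__version__

    print('numpy', np.__version__)
    print('TensorFlow', tf.__version__)
    print('PIL/Pillow import OK:', Image is not None)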

Testing Usage

To quickly test your own images with our model, you can just run

python main.py \
    --use_gpu=1 \                           # use gpu or not
    --gpu_idx=0 \
    --gpu_mem=0.5 \                         # gpu memory usage
    --phase=test \
    --test_dir=/path/to/your/test/dir/ \
    --save_dir=/path/to/save/results/

Training Usage

First, download the training dataset from our project page. Save training pairs of our LOL dataset under ./data/train/low/, and synthetic pairs under ./data/train/normal/. Then, start training by

python main.py \
    --use_gpu=1 \                           # use gpu or not
    --gpu_idx=0 \
    --gpu_mem=0.8 \                         # gpu memory usage
    --phase=train \
    --epoch=50 \                            # number of training epochs
    --batch_size=8 \
    --patch_size=384 \                      # size of training patches
    --base_lr=0.001 \                       # initial learning rate for Adam
    --eval_every_epoch=5 \                  # evaluate and save checkpoints every # epochs
    --checkpoint_dir=./checkpoint \         # created automatically if it does not exist
    --sample_dir=./sample                   # dir for saving evaluation results during training

Experiment Results

Subjective Results

Subjective Result

Objective Results

We use the Naturalness Image Quality Evaluator (NIQE) no-reference image quality score for quantitative comparison. NIQE compares images to a default model computed from images of natural scenes. A smaller score indicates better perceptual quality.
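
For reference, one way to compute this kind of NIQE score in Python is via the third-party pyiqa (IQA-PyTorch) package; this snippet is our illustration, not part of the repository or the paper's evaluation pipeline, and the image path is a placeholder:

    # niqe_score.py -- hedged sketch: scoring an enhanced image with NIQE
    # Assumes the third-party `pyiqa` package; lower NIQE means better quality.
    import torch
    import pyiqa

    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    niqe = pyiqa.create_metric('niqe', device=device)

    # pyiqa metrics accept an image file path (or an NCHW tensor in [0, 1])
    score = niqe('./results/enhanced_0001.png')  # placeholder path
    print('NIQE:', float(score))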

| Method  | DICM  | NPE   | MEF   | Average |
|---------|-------|-------|-------|---------|
| MSRCR   | 3.117 | 3.369 | 4.362 | 3.586   |
| LIME    | 3.243 | 3.649 | 4.745 | 3.885   |
| DeHZ    | 3.608 | 4.258 | 5.071 | 4.338   |
| SRIE    | 2.975 | 3.127 | 4.042 | 3.381   |
| GLADNet | 2.761 | 3.278 | 3.468 | 3.184   |

Computer Vision Application

We test several real low-light images and their corresponding enhanced results on the Google Cloud Vision API. GLADNet helps it identify the objects in these images.

APP1

APP2
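
A check of this kind could be reproduced with the official google-cloud-vision Python client, roughly as sketched below (this snippet is our illustration, not part of the repository; it needs GCP credentials, and the image path is a placeholder for an image enhanced by GLADNet):

    # vision_labels.py -- hedged sketch: label detection on an enhanced image
    # Requires the `google-cloud-vision` client library and configured credentials.
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()

    # Placeholder path: an enhanced image written to --save_dir
    with open('./results/enhanced_0001.png', 'rb') as f:
        image = vision.Image(content=f.read())

    response = client.label_detection(image=image)
    for label in response.label_annotations:
        print(label.description, round(label.score, 3))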

Citation

@inproceedings{wang2018gladnet,
 title={GLADNet: Low-Light Enhancement Network with Global Awareness},
 author={Wang, Wenjing and Wei, Chen and Yang, Wenhan and Liu, Jiaying},
 booktitle={2018 13th IEEE International Conference on Automatic Face \& Gesture Recognition (FG 2018)},
 pages={751--755},
 year={2018},
 organization={IEEE}
}

Related Follow-Up Work

Deep Retinex Decomposition for Low-Light Enhancement. Chen Wei*, Wenjing Wang*, Wenhan Yang, Jiaying Liu. (* indicates equal contributions) In BMVC'18 (Oral Presentation). Website, GitHub
