
DefenceNet

Single-Image Fence Removal Using Deep Convolutional Neural Network (IEEE Access)

[Paper Link (IEEE Access)] [Paper Link (EI 2020)]

In public spaces such as zoos and sports facilities, the presence of fences often annoys tourists and professional photographers. There is a demand for a post-processing tool to produce a non-occluded view from an image or video. This "de-fencing" task is divided into two stages: one to detect fence regions and the other to fill in the missing parts. For over a decade, various methods have been proposed for video-based de-fencing. However, only a few single-image-based methods have been proposed. In this paper, we focus on single-image fence removal. Conventional approaches suffer from inaccurate and non-robust fence detection and inpainting due to limited content information. To solve these problems, we combine novel methods based on a deep convolutional neural network (CNN) with classical domain knowledge from image processing. The training process requires both fence images and corresponding non-fence ground-truth images. Therefore, we synthesize natural fence images from real images. Moreover, spatial filtering (e.g. a Laplacian filter and a Gaussian filter) improves the performance of the CNN for detection and inpainting. Our proposed method automatically detects a fence and generates a clean image without any user input. Experimental results demonstrate that our method is effective for a broad range of fence images.
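As an illustration of the spatial filtering step mentioned above, a Laplacian filter (edge emphasis for detection) and a Gaussian filter (smoothing for inpainting) can be applied with standard Image Processing Toolbox calls. This is a minimal sketch, not code from this repository; the filter sizes and parameters are assumptions.

```matlab
% Minimal sketch of the spatial pre-filtering mentioned in the paper.
% 'fence.jpg' and the filter parameters below are hypothetical.
img = im2double(imread('fence.jpg'));

% Laplacian filter: emphasizes fence edges, useful for detection.
lap = imfilter(img, fspecial('laplacian', 0.2), 'replicate');

% Gaussian filter: smooths the image, useful for inpainting.
gau = imfilter(img, fspecial('gaussian', [5 5], 1.0), 'replicate');
```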

![Framework overview](framework2.png)

Citation

Please cite this paper if you use this code.

@ARTICLE{8933392,
  author={T. {Matsui} and M. {Ikehara}},
  journal={IEEE Access},
  title={Single-Image Fence Removal Using Deep Convolutional Neural Network},
  year={2020},
  volume={8},
  pages={38846-38854}
}

Demo

Run `demo_defence.m`.

Installation

Summary of conventional de-fencing methods and our proposed method

|  | Video-based [3][4][5][6][7] | Multiple-image-based [10][11] | Single-image-based [1] | Single-image-based [13] | Single-image-based [14] | Ours |
| --- | --- | --- | --- | --- | --- | --- |
| Approach | Synthesize multi-focused images | Use temporal information | Key-point detection and K-means clustering | Online learning | Color similarity based on user input | CNN and image filtering |
| Pros | (+) Relatively high performance in static scenes | (+) Applicable to other object-removal tasks | (+) End-to-end algorithm for regular fences | (+) End-to-end algorithm for regular and near-regular fences | (+) Able to detect even irregular fences | (+) End-to-end algorithm regardless of fence colors and shapes<br>(+) Natural appearance |
| Cons | (-) Only for videos<br>(-) High computational cost | (-) Need to prepare when taking photos | (-) Not able to detect near-regular and irregular fences<br>(-) High computational cost | (-) Not able to detect irregular fences<br>(-) High computational cost | (-) Need skilled-user intervention<br>(-) Not able to distinguish fences from backgrounds of similar color | (-) Weak to certain fence orientations and angles |

Dataset

DetectNet

To train the U-Net, we collected 545 real-world fence images with binary masks created by Du et al. From these images, we cropped 128 × 128 × 3 patches. To augment the training data, the cropped patches are randomly flipped, rotated, zoomed, and brightened.
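The augmentation described above can be sketched as follows. This is a hypothetical illustration, not the repository's training code; the crop offsets, zoom range, and brightness range are assumptions.

```matlab
% Hypothetical sketch of the patch augmentation pipeline described above.
% 'img' is assumed to be a loaded fence image (HxWx3, uint8 or double).
x = randi(size(img, 2) - 128);                 % random crop offsets
y = randi(size(img, 1) - 128);
patch = im2double(imcrop(img, [x, y, 127, 127]));  % 128 x 128 x 3 patch

if rand > 0.5, patch = fliplr(patch); end      % random horizontal flip
patch = imrotate(patch, 90 * randi([0 3]));    % random 90-degree rotation

z = 0.8 + 0.4 * rand;                          % assumed zoom range [0.8, 1.2]
patch = imresize(imresize(patch, z), [128 128]);  % random zoom, back to 128 x 128

patch = min(patch * (0.8 + 0.4 * rand), 1);    % random brightness, clipped to [0, 1]
```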

RemoveNet

Fence images are created by combining fence masks with clean outdoor images from the UCID and BSD datasets in the directory `/dataset/train/`. If you would like to create a new pair of a fence image and a fence mask, run:

`[fence_image, fence_mask] = add_fence(img, theta, scale, color_num, noise, real);`
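For example, `add_fence` might be called as below. The argument values here are purely illustrative guesses at each parameter's meaning; see `add_fence.m` for the actual ranges and semantics.

```matlab
% Hypothetical example call; parameter values are illustrative only.
img = imread('dataset/train/example.jpg');  % a clean background image (assumed filename)
theta     = 30;     % assumed: fence rotation angle in degrees
scale     = 1.5;    % assumed: fence spacing / scale factor
color_num = 3;      % assumed: number of fence colors
noise     = 0.01;   % assumed: additive noise level
real      = false;  % assumed: whether to use a real fence texture

[fence_image, fence_mask] = add_fence(img, theta, scale, color_num, noise, real);
```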

Author

takuro-matsui

If you have any questions, please feel free to send an e-mail to [email protected].
