PyTorch implementation of MEANet: Multi-Modal Edge-Aware Network for Light Field Salient Object Detection.
- Python 3.6
- PyTorch 1.10.2
- Torchvision 0.4.0
- CUDA 10.0
- TensorBoard 2.7.0
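As a quick sanity check, the installed versions can be printed and compared against the list above (optional):

```python
# Optional sanity check: print installed versions to compare with the list above.
import torch
import torchvision
import tensorboard

print("PyTorch:", torch.__version__)
print("Torchvision:", torchvision.__version__)
print("CUDA (build):", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
print("TensorBoard:", tensorboard.__version__)
```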
- Download the training dataset and modify 'train_data_path' to point to it.
- Start training with:

```
python -m torch.distributed.launch --nproc_per_node=4 train.py
```
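For context, here is a minimal sketch of how a script launched with torch.distributed.launch typically initializes DistributedDataParallel training. It is not the repository's train.py; the placeholder model and the dataset comment are illustrative assumptions only.

```python
# Sketch of a script launched via torch.distributed.launch (one process per GPU).
# Not the repository's train.py; the Conv2d below is a placeholder for the MEANet network.
import argparse

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    parser = argparse.ArgumentParser()
    # torch.distributed.launch passes --local_rank to every spawned process.
    parser.add_argument("--local_rank", type=int, default=0)
    args = parser.parse_args()

    # The launcher sets MASTER_ADDR/PORT, RANK and WORLD_SIZE, so env:// init works.
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(args.local_rank)

    model = nn.Conv2d(3, 1, 3, padding=1).cuda(args.local_rank)  # placeholder for MEANet
    model = DDP(model, device_ids=[args.local_rank])

    # ... build the dataset from 'train_data_path', wrap it in a
    # torch.utils.data.distributed.DistributedSampler, and run the
    # usual forward/backward/optimizer loop here ...


if __name__ == "__main__":
    main()
```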
- Download the testing dataset and put it in the 'dataset/test/' folder.
- Download the trained MEANet model and put it in the 'trained_weight/' folder.
- Change 'weight_name' in test.py to the model to be evaluated.
- Start testing with:

```
python test.py
```
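For reference, a minimal sketch of the evaluation step: loading a checkpoint from 'trained_weight/' and switching to inference mode. It is not the repository's test.py; the checkpoint file name and the placeholder module are assumptions for illustration.

```python
# Sketch of loading a trained checkpoint for evaluation.
# Not the repository's test.py; the file name and Conv2d placeholder are assumptions.
import torch
import torch.nn as nn

weight_name = "trained_weight/MEANet.pth"      # assumed file name; set to the model to evaluate

model = nn.Conv2d(3, 1, 3, padding=1)          # placeholder for the actual MEANet network
state_dict = torch.load(weight_name, map_location="cpu")
model.load_state_dict(state_dict, strict=False)  # strict=False only because of the placeholder
model.eval()

with torch.no_grad():
    # ... iterate over the images in 'dataset/test/', run the forward pass,
    # and save the predicted saliency maps ...
    pass
```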
We released two versions of the trained model:
- Trained with 100 additional samples from HFUT-Lytro: on Baidu Pan with fetch code 0o0r, or on Google Drive.
- Trained only with DUTLF-FS: on Baidu Pan with fetch code 75bn, or on Google Drive.

We released two versions of the saliency maps:
- Trained with 100 additional samples from HFUT-Lytro: on Baidu Pan with fetch code x7xa, or on Google Drive.
- Trained only with DUTLF-FS: on Baidu Pan with fetch code s7vn, or on Google Drive.
Please cite our paper if you find the work useful:
```
@article{JIANG202278,
  title   = {MEANet: Multi-modal edge-aware network for light field salient object detection},
  journal = {Neurocomputing},
  volume  = {491},
  pages   = {78-90},
  year    = {2022},
  author  = {Yao Jiang and Wenbo Zhang and Keren Fu and Qijun Zhao}
}
```