A Python-based concealed object segmentation (COS) evaluation toolbox.
This repo provides one-key processing for nine evaluation metrics:
- MAE
- weighted F-measure
- S-measure
- max/average/adaptive F-measure
- max/average/adaptive E-measure
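As a simplified illustration of two of these metrics (not the toolbox's exact implementation), MAE is the mean absolute difference between a prediction and its binary ground truth, and the adaptive F-measure binarizes the prediction at twice its mean value before computing the F-score with the conventional beta^2 = 0.3:

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a [0, 1] prediction and a binary mask."""
    return np.abs(pred - gt).mean()

def adaptive_fmeasure(pred, gt, beta2=0.3):
    """F-measure with the adaptive threshold min(2 * mean(pred), 1)."""
    thresh = min(2 * pred.mean(), 1.0)
    binary = pred >= thresh
    tp = np.logical_and(binary, gt).sum()
    precision = tp / max(binary.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    return (1 + beta2) * precision * recall / max(beta2 * precision + recall, 1e-8)

# Toy example: a square object with a noisy prediction around it.
rng = np.random.default_rng(0)
gt = np.zeros((64, 64), dtype=bool)
gt[16:48, 16:48] = True
pred = np.clip(gt * 0.8 + rng.normal(0, 0.1, gt.shape), 0, 1)
print(mae(pred, gt), adaptive_fmeasure(pred, gt))
```

Lower MAE is better; higher F-measure is better. The toolbox computes these (plus the max/average variants over all 256 thresholds) for you.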
To evaluate concealed object segmentation (COS) approaches, first install the required libraries:
pip install -r requirements.txt
Then, download the benchmark datasets (OneDrive, 1.16GB) and prediction masks (OneDrive, 4.82GB) and run this command:
python eval.py --dataset-json examples_COS/config_cos_dataset_py_example.json \
--method-json examples_COS/config_cos_method_py_example_all.json \
--metric-npy output_COS/cos_metrics.npy \
--curves-npy output_COS/cos_curves.npy \
--record-txt output_COS/cos_results.txt
Your results will be stored at ./cos_eval_toolbox/output_COS/cos_results.txt
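Besides the text report, the --metric-npy dump can be inspected with NumPy. The sketch below builds a small fake dump first so it is self-contained; the nested layout ({dataset: {method: {metric: value}}}) and the method name "SINet" are illustrative assumptions, so adapt the keys to whatever your actual output contains:

```python
import numpy as np

# Simulate a metrics dump (hypothetical layout: {dataset: {method: {metric: value}}};
# the scores below are made up for the demo).
fake = {"COD10K": {"SINet": {"MAE": 0.051, "Smeasure": 0.771}}}
np.save("cos_metrics_demo.npy", fake)

# The dump is a pickled Python dict, so allow_pickle=True is required,
# and .item() unwraps the 0-d object array produced by np.load.
metrics = np.load("cos_metrics_demo.npy", allow_pickle=True).item()
for dataset, methods in metrics.items():
    for method, scores in methods.items():
        print(dataset, method, scores)
```

The same loading pattern applies to the --curves-npy file, which stores the per-threshold curves used for the max/average metrics.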
- Put your prediction masks into a custom path like ./benchmark/COS-Benchmarking and prepare your dataset like ./cos_eval_toolbox/dataset/COD10K/. Then, generate the Python-style configs via
python tools/generate_cos_config_files.py
- generate the JSON-style files via
python tools/info_py_to_json.py -i ./examples_COS -o ./examples_COS
- check files via
python tools/check_path.py -m examples_COS/config_cos_method_py_example.json -d examples_COS/config_cos_dataset_py_example.json
- start the evaluation via
python eval.py --dataset-json examples_COS/config_cos_dataset_py_example.json \
--method-json examples_COS/config_cos_method_py_example.json \
--metric-npy output_COS/cos_metrics.npy \
--curves-npy output_COS/cos_curves.npy \
--record-txt output_COS/cos_results.txt
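The four steps above can be chained into one script (run from the repo root), so a failure in config generation or path checking stops the run before the evaluation starts:

```shell
#!/usr/bin/env bash
set -e  # abort on the first failing step

# 1. Generate the Python-style configs.
python tools/generate_cos_config_files.py
# 2. Convert them to JSON-style files.
python tools/info_py_to_json.py -i ./examples_COS -o ./examples_COS
# 3. Check that every configured path exists.
python tools/check_path.py -m examples_COS/config_cos_method_py_example.json \
                           -d examples_COS/config_cos_dataset_py_example.json
# 4. Run the evaluation.
python eval.py --dataset-json examples_COS/config_cos_dataset_py_example.json \
               --method-json examples_COS/config_cos_method_py_example.json \
               --metric-npy output_COS/cos_metrics.npy \
               --curves-npy output_COS/cos_curves.npy \
               --record-txt output_COS/cos_results.txt
```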
@inproceedings{Fmeasure,
title={Frequency-tuned salient region detection},
author={Achanta, Radhakrishna and Hemami, Sheila and Estrada, Francisco and S{\"u}sstrunk, Sabine},
booktitle={CVPR},
pages={1597--1604},
year={2009}
}
@inproceedings{MAE,
title={Saliency filters: Contrast based filtering for salient region detection},
author={Perazzi, Federico and Kr{\"a}henb{\"u}hl, Philipp and Pritch, Yael and Hornung, Alexander},
booktitle={CVPR},
pages={733--740},
year={2012}
}
@inproceedings{Smeasure,
title={Structure-measure: A new way to evaluate foreground maps},
author={Fan, Deng-Ping and Cheng, Ming-Ming and Liu, Yun and Li, Tao and Borji, Ali},
booktitle={ICCV},
pages={4548--4557},
year={2017}
}
@inproceedings{Emeasure,
title={Enhanced-alignment measure for binary foreground map evaluation},
author={Fan, Deng-Ping and Gong, Cheng and Cao, Yang and Ren, Bo and Cheng, Ming-Ming and Borji, Ali},
booktitle={IJCAI},
pages={698--704},
year={2018}
}
@inproceedings{wFmeasure,
title={How to evaluate foreground maps?},
author={Margolin, Ran and Zelnik-Manor, Lihi and Tal, Ayellet},
booktitle={CVPR},
pages={248--255},
year={2014}
}
This repo is built on PySODEvalToolkit. We thank Dr. Pang for his excellent work; please refer to its README.md for more advanced usage.