The official PyTorch implementation of "FocusCut: Diving into a Focus View in Interactive Segmentation", an oral paper at CVPR 2022.
The environment is listed in requirements.txt; install the dependencies with:
pip3 install -r requirements.txt
Put the pretrained models into the folder "pretrained_model" and the unzipped datasets into the folder "dataset".
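The expected top-level layout can be prepared as below (the two folder names come from the instructions above; the dataset subfolder names are assumptions based on the download list further down):

```shell
# Create the folders the scripts expect.
mkdir -p pretrained_model dataset
# After downloading, the tree should look roughly like this
# (subfolder names are assumptions following the download list):
#   pretrained_model/focuscut-resnet50.pth
#   dataset/GrabCut/  dataset/Berkeley/  dataset/DAVIS/  dataset/SBD/
```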
Train (ResNet-50 backbone):
CUDA_VISIBLE_DEVICES=0 python main.py -rf -ap "backbone='resnet50'"
Evaluate on GrabCut, Berkeley, and DAVIS:
CUDA_VISIBLE_DEVICES=0 python main.py -v -r focuscut-resnet50.pth -ap "backbone='resnet50'" -hrv -dv GrabCut,Berkeley,DAVIS
Annotate a single image interactively:
CUDA_VISIBLE_DEVICES=0 python annotator.py -r focuscut-resnet50.pth -ap "backbone='resnet50'" -hrv -img test.jpg
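Interactive-segmentation evaluation on GrabCut, Berkeley, and DAVIS is conventionally reported as the mean Number of Clicks (NoC) needed to reach a target IoU. As a hedged illustration of those two metrics (this is not code from this repository; the `iou` and `noc` helpers and the 90% threshold are assumptions following common practice):

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-Union between two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union > 0 else 1.0

def noc(ious_per_click, threshold=0.90, max_clicks=20):
    """Index (1-based) of the first click whose IoU reaches `threshold`;
    returns `max_clicks` if the threshold is never reached."""
    for i, v in enumerate(ious_per_click, start=1):
        if v >= threshold:
            return i
    return max_clicks

# Toy example: IoU measured after each simulated click on one image.
print(noc([0.62, 0.81, 0.93, 0.95]))  # threshold 0.90 first reached at click 3
```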
These datasets are organized in a unified format, our Interactive Segmentation Format (ISF):
- GrabCut (GoogleDrive | BaiduYun)
- Berkeley (GoogleDrive | BaiduYun)
- DAVIS (GoogleDrive | BaiduYun)
- SBD (GoogleDrive | BaiduYun)
Pretrained models:
- focuscut-resnet50 (GoogleDrive | BaiduYun)
- focuscut-resnet101 (GoogleDrive | BaiduYun)
If you find this work or code helpful in your research, please cite:
@inproceedings{lin2022focuscut,
  title={FocusCut: Diving into a Focus View in Interactive Segmentation},
  author={Lin, Zheng and Duan, Zheng-Peng and Zhang, Zhao and Guo, Chun-Le and Cheng, Ming-Ming},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={2637--2646},
  year={2022}
}
If you have any questions, feel free to contact me via frazer.linzheng(at)gmail.com.
You are also welcome to visit the project page or my homepage.
This work is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License, for non-commercial use only. Any commercial use requires formal permission first.