# [CVPR 2024] Open-Set Domain Adaptation for Semantic Segmentation

Official PyTorch implementation for CVPR 2024 paper:

Open-Set Domain Adaptation for Semantic Segmentation
Seun-An Choe\*, Ah-Hyung Shin\*, Keon-Hee Park, Jinwoo Choi$^\dagger$, and Gyeong-Moon Park$^\dagger$

arXiv

## How to run

### Setup Environment

We used Python 3.8.5.

```shell
python -m venv ~/venv/bus
source ~/venv/bus/bin/activate
pip install -r requirements.txt -f https://download.pytorch.org/whl/torch_stable.html
pip install mmcv-full==1.3.7  # requires the other packages to be installed first
```
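As an optional sanity check after installation, a short script like the following (a sketch, not part of the repository) can report which versions ended up in the virtualenv:

```python
# Optional sanity check for the virtualenv (hypothetical helper, not repo
# code): reports the installed versions of the packages pinned above.
import importlib
import sys


def installed_version(pkg):
    """Return a package's __version__ if it is importable, else None."""
    try:
        module = importlib.import_module(pkg)
    except ImportError:
        return None
    return getattr(module, "__version__", None)


if __name__ == "__main__":
    print("python:", sys.version.split()[0])      # expected: 3.8.x
    print("torch :", installed_version("torch"))
    print("mmcv  :", installed_version("mmcv"))   # expected: 1.3.7
```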

Download the MiT-B5 ImageNet weights provided by SegFormer from their OneDrive and put them in the folder `pretrained/`.

Download the MobileSAM weights provided by MobileSAM and put only `mobile_sam.pt` in the folder `weights/`.

### Setup Datasets

Download the GTA5, SYNTHIA, and Cityscapes datasets.

The final folder structure should look like this:

```
BUS
├── ...
├── data
│   ├── cityscapes
│   │   ├── leftImg8bit
│   │   │   ├── train
│   │   │   ├── val
│   │   ├── gtFine
│   │   │   ├── train
│   │   │   ├── val
│   ├── gta
│   │   ├── images
│   │   ├── labels
```
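After downloading, a small script like this (a sketch, not part of the repository) can verify that the layout above is in place before preprocessing:

```python
import os

# Hypothetical helper (not repo code): verify the expected BUS data layout.
# Paths mirror the folder tree above.
EXPECTED_DIRS = [
    "data/cityscapes/leftImg8bit/train",
    "data/cityscapes/leftImg8bit/val",
    "data/cityscapes/gtFine/train",
    "data/cityscapes/gtFine/val",
    "data/gta/images",
    "data/gta/labels",
]


def missing_dirs(root="."):
    """Return the expected dataset directories that do not exist under root."""
    return [d for d in EXPECTED_DIRS if not os.path.isdir(os.path.join(root, d))]


if __name__ == "__main__":
    missing = missing_dirs()
    if missing:
        print("Missing directories:", *missing, sep="\n  ")
    else:
        print("Dataset layout OK")
```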

Data preprocessing: finally, run the following scripts to convert the label IDs to train IDs and to generate the class indices for the OSDA-SS scenario:

```shell
python tools/convert_datasets/gta_13.py data/gta --nproc 8
python tools/convert_datasets/cityscapes_13.py data/cityscapes --nproc 8
```

The commands above correspond to the GTA5 → Cityscapes scenario with the six private classes set to "pole", "traffic sign", "person", "rider", "truck", and "train".
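For reference, the split can be sketched from the 19 standard Cityscapes train classes: removing the six private classes leaves the 13 known classes that the `*_13.py` scripts are named after. The class names below are the standard Cityscapes ones; the authoritative list lives in the conversion scripts themselves.

```python
# Sketch of the OSDA-SS class split (assumption based on the six private
# classes named above; confirm against tools/convert_datasets/*_13.py).
CITYSCAPES_TRAIN_CLASSES = [
    "road", "sidewalk", "building", "wall", "fence", "pole",
    "traffic light", "traffic sign", "vegetation", "terrain", "sky",
    "person", "rider", "car", "truck", "bus", "train",
    "motorcycle", "bicycle",
]
PRIVATE_CLASSES = {"pole", "traffic sign", "person", "rider", "truck", "train"}

# Known classes = the 19 standard train classes minus the 6 private ones.
KNOWN_CLASSES = [c for c in CITYSCAPES_TRAIN_CLASSES if c not in PRIVATE_CLASSES]

if __name__ == "__main__":
    print(len(KNOWN_CLASSES), "known classes:", KNOWN_CLASSES)
```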

### Training

```shell
python run_experiments.py --config configs/mic/gtaHR2csHR_mic_hrda_512.py
```

### Testing

```shell
sh test.sh work_dirs/run_name/
```

A pretrained model for GTA5 → Cityscapes can be downloaded at: Link. This model achieves an H-Score of 62.81.
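In open-set DA evaluation, the H-Score is conventionally the harmonic mean of the mIoU over the known classes and the IoU of the unknown class; the sketch below uses that common definition (check the paper for the exact variant used here).

```python
def h_score(known_miou, unknown_iou):
    """Harmonic mean of known-class mIoU and unknown-class IoU (both in %)."""
    if known_miou + unknown_iou == 0:
        return 0.0
    return 2.0 * known_miou * unknown_iou / (known_miou + unknown_iou)


if __name__ == "__main__":
    print(h_score(60.0, 40.0))  # 48.0
```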

## Citation

```bibtex
@inproceedings{choe2024open,
  title={Open-Set Domain Adaptation for Semantic Segmentation},
  author={Choe, Seun-An and Shin, Ah-Hyung and Park, Keon-Hee and Choi, Jinwoo and Park, Gyeong-Moon},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={23943--23953},
  year={2024}
}
```

## Acknowledgement

This code is heavily borrowed from MIC, MobileSAM, and DACS.
