yxiwang/ATP


A Curriculum-style Self-training Approach for Source-Free Semantic Segmentation

Yuxi Wang, Jian Liang, Zhaoxiang Zhang

Architecture

In this work, we propose ATP, a novel source data-free adaptation framework for semantic segmentation. The proposed method consists of three steps: curriculum feature alignment, complementary self-training, and information propagation. Extensive experiments demonstrate that the proposed ATP boosts the performance of source data-free domain adaptation tasks.

Installation

Following the repo of SePiCo, the environment is set up as follows.

This code is implemented with Python 3.8.5 and PyTorch 1.7.1 on CUDA 11.0.

To try out this project, it is recommended to set up a virtual environment first:

# create and activate the environment
conda create --name ATP -y python=3.8.5
conda activate ATP

# install the right pip and dependencies for the fresh python
conda install -y ipython pip

Then, the dependencies can be installed by:

# install required packages
pip install -r requirements.txt

# install mmcv-full, this command compiles mmcv locally and may take some time
pip install mmcv-full==1.3.7  # requires other packages to be installed first

Alternatively, mmcv-full can be installed faster from the official pre-built packages, for instance:

# another way to install mmcv-full, faster
pip install mmcv-full==1.3.7 -f https://download.openmmlab.com/mmcv/dist/cu110/torch1.7.0/index.html

The environment is now fully prepared.
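As a convenience (not part of the repo), the versions pinned above can be sanity-checked with a small script; the expected version strings below simply mirror this README:

```python
"""Sanity-check that the installed packages match the versions in this README."""

def parse_version(version):
    """Turn a version string like '1.7.1' or '1.7.1+cu110' into a tuple (1, 7, 1)."""
    return tuple(int(part) for part in version.split("+")[0].split(".")[:3])

def check_environment():
    """Report whether the installed PyTorch/mmcv versions match the README."""
    expected = {"torch": "1.7.1", "mmcv": "1.3.7"}
    for module_name, wanted in expected.items():
        try:
            module = __import__(module_name)
        except ImportError:
            print(f"{module_name}: not installed (expected {wanted})")
            continue
        found = getattr(module, "__version__", "unknown")
        status = "OK" if parse_version(found) == parse_version(wanted) else "MISMATCH"
        print(f"{module_name}: {found} (expected {wanted}) -> {status}")

if __name__ == "__main__":
    check_environment()
```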

Datasets Preparation

Download Datasets

  • GTAV: Download all zipped images, along with their zipped labels, from here and extract them to a custom directory.
  • Cityscapes: Download leftImg8bit_trainvaltest.zip and gtFine_trainvaltest.zip from here and extract them to a custom directory.
  • ACDC: Download rgb_anon_trainvaltest.zip and gt_trainval.zip from here and extract them to a custom directory.

Setup Datasets

Symlink the required datasets:

ln -s /path/to/gta5/dataset data/gta
ln -s /path/to/cityscapes/dataset data/cityscapes

Perform preprocessing to convert label IDs to the train IDs and gather dataset statistics:

python tools/convert_datasets/gta.py data/gta --nproc 8
python tools/convert_datasets/cityscapes.py data/cityscapes --nproc 8
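Conceptually, these conversion scripts remap the raw label IDs to the 19 train IDs used for training and evaluation. A minimal sketch of that remapping, following the standard Cityscapes convention (only a few classes shown; the full mapping lives in the conversion scripts):

```python
# Standard Cityscapes label-ID -> train-ID convention (subset for illustration).
ID_TO_TRAINID = {
    7: 0,    # road
    8: 1,    # sidewalk
    11: 2,   # building
    19: 6,   # traffic light
    24: 11,  # person
    26: 13,  # car
}
IGNORE_LABEL = 255  # IDs outside the mapping are ignored during training

def convert_label(label_ids):
    """Remap a flat list of raw label IDs to train IDs (255 = ignore)."""
    return [ID_TO_TRAINID.get(label_id, IGNORE_LABEL) for label_id in label_ids]
```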

Ultimately, the data structure should look like this:

ATP
├── ...
├── data
│   ├── cityscapes
│   │   ├── gtFine
│   │   ├── leftImg8bit
│   ├── dark_zurich
│   │   ├── corresp
│   │   ├── gt
│   │   ├── rgb_anon
│   ├── gta
│   │   ├── images
│   │   ├── labels
├── ...
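A quick way to verify the symlinks produced the layout above is a small stdlib check (a helper sketch, not part of the repo; only the GTA5 and Cityscapes entries from the tree are encoded):

```python
from pathlib import Path

# Expected sub-directories per dataset, mirroring the tree above.
EXPECTED_LAYOUT = {
    "cityscapes": ["gtFine", "leftImg8bit"],
    "gta": ["images", "labels"],
}

def missing_paths(data_root, layout=EXPECTED_LAYOUT):
    """Return the expected dataset sub-directories missing under data_root."""
    root = Path(data_root)
    return [
        str(Path(dataset) / sub)
        for dataset, subs in layout.items()
        for sub in subs
        if not (root / dataset / sub).is_dir()
    ]
```

`missing_paths("data")` returns an empty list when the symlinks are set up correctly.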

Checkpoints

Below, we provide checkpoints of ATP for different benchmarks.

GTA5 → Cityscapes (DAFormer based)

| Variants | Model name | mIoU | Checkpoint download |
| --- | --- | --- | --- |
| Step A | gta2cityscapes_daformer_aligning.pth | 54.0 | ATP for GTA5 → Cityscapes (Aligning) |
| Step T | gta2cityscapes_daformer_teaching.pth | 62.4 | ATP for GTA5 → Cityscapes (Teaching) |
| Step P | gta2cityscapes_daformer_propagating.pth | 64.0 | ATP for GTA5 → Cityscapes (Propagating) |

SYNTHIA → Cityscapes (DAFormer based)

| Variants | Model name | mIoU | Checkpoint download |
| --- | --- | --- | --- |
| Step A | syn2cityscapes_daformer_aligning.pth | 56.5 | ATP for SYNTHIA → Cityscapes (Aligning) |
| Step T | syn2cityscapes_daformer_teaching.pth | 64.9 | ATP for SYNTHIA → Cityscapes (Teaching) |
| Step P | syn2cityscapes_daformer_propagating.pth | 66.6 | ATP for SYNTHIA → Cityscapes (Propagating) |

Cityscapes → ACDC (DAFormer based)

| Variants | Model name | mIoU | Checkpoint download |
| --- | --- | --- | --- |
| Step A | cityscapes2ACDC_Night_daformer_aligning.pth | 28.4 | ATP for Cityscapes → ACDC (Night) (Aligning) |
| Step T | cityscapes2ACDC_Night_daformer_teaching.pth | 34.4 | ATP for Cityscapes → ACDC (Night) (Teaching) |
| Step P | cityscapes2ACDC_Night_daformer_propagating.pth | 35.8 | ATP for Cityscapes → ACDC (Night) (Propagating) |
| Step A | cityscapes2ACDC_Fog_daformer_aligning.pth | 60.9 | ATP for Cityscapes → ACDC (Fog) (Aligning) |
| Step T | cityscapes2ACDC_Fog_daformer_teaching.pth | 70.1 | ATP for Cityscapes → ACDC (Fog) (Teaching) |
| Step P | cityscapes2ACDC_Fog_daformer_propagating.pth | 72.1 | ATP for Cityscapes → ACDC (Fog) (Propagating) |
| Step A | cityscapes2ACDC_Rain_daformer_aligning.pth | 53.0 | ATP for Cityscapes → ACDC (Rain) (Aligning) |
| Step T | cityscapes2ACDC_Rain_daformer_teaching.pth | 60.9 | ATP for Cityscapes → ACDC (Rain) (Teaching) |
| Step P | cityscapes2ACDC_Rain_daformer_propagating.pth | 63.0 | ATP for Cityscapes → ACDC (Rain) (Propagating) |
| Step A | cityscapes2ACDC_Snow_daformer_aligning.pth | 54.6 | ATP for Cityscapes → ACDC (Snow) (Aligning) |
| Step T | cityscapes2ACDC_Snow_daformer_teaching.pth | 62.3 | ATP for Cityscapes → ACDC (Snow) (Teaching) |
| Step P | cityscapes2ACDC_Snow_daformer_propagating.pth | 62.6 | ATP for Cityscapes → ACDC (Snow) (Propagating) |

Training

To begin with, download SegFormer's official MiT-B5 weights (i.e., mit_b5.pth), pretrained on ImageNet-1k, from here and put the file into a new folder ./pretrained.

The training entry point is run_experiments.py. To examine the settings for a specific task, please see experiments.py for more details. Generally, training is launched as:

python run_experiments.py --exp <exp_id>

Evaluation

Evaluation on Cityscapes

To evaluate the pretrained models on Cityscapes, please run as follows:

python -m tools.test /path/to/config /path/to/checkpoint --eval mIoU
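For reference, `--eval mIoU` reports the mean intersection-over-union over the semantic classes. An illustrative sketch of how mIoU is computed from a per-class confusion matrix (not the actual mmseg implementation):

```python
def mean_iou(confusion):
    """Compute mean IoU from a square confusion matrix.

    confusion[i][j] counts pixels of ground-truth class i predicted as class j.
    IoU_c = TP_c / (TP_c + FP_c + FN_c); classes that never occur are skipped.
    """
    num_classes = len(confusion)
    ious = []
    for c in range(num_classes):
        tp = confusion[c][c]
        fn = sum(confusion[c]) - tp
        fp = sum(confusion[r][c] for r in range(num_classes)) - tp
        denom = tp + fp + fn
        if denom > 0:
            ious.append(tp / denom)
    return sum(ious) / len(ious) if ious else 0.0
```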

Acknowledgments

This project is based on the following open-source projects. We thank their authors for making the source code publicly available.

Contact

For help and issues associated with ATP, or to report a bug, please open a GitHub issue, or feel free to contact [email protected].
