Yuxi Wang, Jian Liang, Zhaoxiang Zhang,
In this work, we propose ATP, a novel source data-free adaptation framework for semantic segmentation tasks. The proposed method consists of three steps: curriculum feature alignment, complementary self-training, and information propagation. Extensive experiments demonstrate that the proposed ATP boosts the performance of source data-free domain adaptation tasks.
Following the SePiCo repo, the environment is set up as follows.
This code is implemented with Python 3.8.5 and PyTorch 1.7.1 on CUDA 11.0.
To try out this project, it is recommended to set up a virtual environment first:
# create and activate the environment
conda create --name ATP -y python=3.8.5
conda activate ATP
# install the right pip and dependencies for the fresh python
conda install -y ipython pip
Then, the dependencies can be installed by:
# install required packages
pip install -r requirements.txt
# install mmcv-full, this command compiles mmcv locally and may take some time
pip install mmcv-full==1.3.7 # requires other packages to be installed first
Alternatively, the mmcv-full package can be installed faster with official pre-built packages, for instance:
# another way to install mmcv-full, faster
pip install mmcv-full==1.3.7 -f https://download.openmmlab.com/mmcv/dist/cu110/torch1.7.0/index.html
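Optionally, you can sanity-check the installation with a quick import test (this check is not part of the original instructions):
# optional: verify that PyTorch, CUDA, and mmcv are importable and report their versions
python -c "import torch, mmcv; print(torch.__version__, torch.cuda.is_available(), mmcv.__version__)"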
The environment is now fully prepared.
- GTAV: Download all zipped images, along with their zipped labels, from here and extract them to a custom directory.
- Cityscapes: Download leftImg8bit_trainvaltest.zip and gtFine_trainvaltest.zip from here and extract them to a custom directory.
- ACDC: Download rgb_anon_trainvaltest.zip and gt_trainval.zip from here and extract them to a custom directory.
Symlink the required datasets:
ln -s /path/to/gta5/dataset data/gta
ln -s /path/to/cityscapes/dataset data/cityscapes
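The commands above do not include a symlink for ACDC; if it follows the same pattern, it would presumably be (the data/acdc target is an assumption, please verify against the dataset config):
# assumed ACDC symlink, mirroring the pattern above; verify the expected target directory
ln -s /path/to/acdc/dataset data/acdc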
Perform preprocessing to convert label IDs to the train IDs and gather dataset statistics:
python tools/convert_datasets/gta.py data/gta --nproc 8
python tools/convert_datasets/cityscapes.py data/cityscapes --nproc 8
Ultimately, the data structure should look like this:
ATP
├── ...
├── data
│ ├── cityscapes
│ │ ├── gtFine
│ │ ├── leftImg8bit
│ ├── dark_zurich
│ │ ├── corresp
│ │ ├── gt
│ │ ├── rgb_anon
│ ├── gta
│ │ ├── images
│ │ ├── labels
├── ...
Below, we provide checkpoints of ATP for different benchmarks.
GTA5 → Cityscapes:
variants | model name | mIoU | checkpoint download |
---|---|---|---|
Step A | gta2cityscapes_daformer_aligning.pth | 54.0 | ATP for GTA5 → Cityscapes (Aligning) |
Step T | gta2cityscapes_daformer_teaching.pth | 62.4 | ATP for GTA5 → Cityscapes (Teaching) |
Step P | gta2cityscapes_daformer_propagating.pth | 64.0 | ATP for GTA5 → Cityscapes (Propagating) |
SYNTHIA → Cityscapes:
variants | model name | mIoU | checkpoint download |
---|---|---|---|
Step A | syn2cityscapes_daformer_aligning.pth | 56.5 | ATP for SYNTHIA → Cityscapes (Aligning) |
Step T | syn2cityscapes_daformer_teaching.pth | 64.9 | ATP for SYNTHIA → Cityscapes (Teaching) |
Step P | syn2cityscapes_daformer_propagating.pth | 66.6 | ATP for SYNTHIA → Cityscapes (Propagating) |
Cityscapes → ACDC:
variants | model name | mIoU | checkpoint download |
---|---|---|---|
Step A | cityscapes2ACDC_Night_daformer_aligning.pth | 28.4 | ATP for Cityscapes → ACDC (Night) (Aligning) |
Step T | cityscapes2ACDC_Night_daformer_teaching.pth | 34.4 | ATP for Cityscapes → ACDC (Night) (Teaching) |
Step P | cityscapes2ACDC_Night_daformer_propagating.pth | 35.8 | ATP for Cityscapes → ACDC (Night) (Propagating) |
Step A | cityscapes2ACDC_Fog_daformer_aligning.pth | 60.9 | ATP for Cityscapes → ACDC (Fog) (Aligning) |
Step T | cityscapes2ACDC_Fog_daformer_teaching.pth | 70.1 | ATP for Cityscapes → ACDC (Fog) (Teaching) |
Step P | cityscapes2ACDC_Fog_daformer_propagating.pth | 72.1 | ATP for Cityscapes → ACDC (Fog) (Propagating) |
Step A | cityscapes2ACDC_Rain_daformer_aligning.pth | 53.0 | ATP for Cityscapes → ACDC (Rain) (Aligning) |
Step T | cityscapes2ACDC_Rain_daformer_teaching.pth | 60.9 | ATP for Cityscapes → ACDC (Rain) (Teaching) |
Step P | cityscapes2ACDC_Rain_daformer_propagating.pth | 63.0 | ATP for Cityscapes → ACDC (Rain) (Propagating) |
Step A | cityscapes2ACDC_Snow_daformer_aligning.pth | 54.6 | ATP for Cityscapes → ACDC (Snow) (Aligning) |
Step T | cityscapes2ACDC_Snow_daformer_teaching.pth | 62.3 | ATP for Cityscapes → ACDC (Snow) (Teaching) |
Step P | cityscapes2ACDC_Snow_daformer_propagating.pth | 62.6 | ATP for Cityscapes → ACDC (Snow) (Propagating) |
To begin with, download SegFormer's official MiT-B5 weights (i.e., mit_b5.pth) pretrained on ImageNet-1k from here and put them into a new folder ./pretrained.
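For example (the download location below is illustrative):
# create the folder and move the downloaded weights into it (source path is illustrative)
mkdir -p pretrained
mv /path/to/downloads/mit_b5.pth pretrained/mit_b5.pth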
The training entry point is run_experiments.py. To examine the settings for a specific task, please take a look at experiments.py for more details. Generally, training is launched as:
python run_experiments.py --exp <exp_id>
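For instance (the experiment id below is a placeholder; use one of the ids actually defined in experiments.py):
# example invocation; replace 1 with an experiment id defined in experiments.py
python run_experiments.py --exp 1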
To evaluate the pretrained models on Cityscapes, please run as follows:
python -m tools.test /path/to/config /path/to/checkpoint --eval mIoU
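For example, to evaluate one of the checkpoints provided above (the config and checkpoint paths are illustrative and depend on where you saved the files):
# example: evaluate the downloaded GTA5 → Cityscapes "Propagating" checkpoint (paths are illustrative)
python -m tools.test /path/to/config checkpoints/gta2cityscapes_daformer_propagating.pth --eval mIoU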
This project is based on the following open-source projects. We thank their authors for making the source code publicly available.
For help and issues associated with ATP, or to report a bug, please open a [GitHub Issue], or feel free to contact [email protected].