
Exploring Negatives in Contrastive Learning for Unpaired Image-to-Image Translation





We provide our PyTorch implementation of Exploring Negatives in Contrastive Learning for Unpaired Image-to-Image Translation (PUT).

In this paper, we propose a novel model called PUT for unpaired image-to-image translation. Compared with previous contrastive learning methods, PUT learns the correspondence between matching patches more stably, leading to a more effective contrastive learning system.
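To make the patch-contrastive idea concrete, the sketch below shows a generic PatchNCE-style loss, where each query patch is pulled toward the feature at the same spatial location and pushed away from all other patches. This is an illustrative assumption, not the exact PUT objective; see the paper and the code in this repository for the actual loss.

```python
# Minimal sketch of a PatchNCE-style patch contrastive loss (illustrative
# only, not the exact PUT objective). The positive for each query patch is
# the key patch at the same index; all other patches act as negatives.
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_q, feat_k, temperature=0.07):
    """feat_q, feat_k: (num_patches, dim) patch features."""
    feat_q = F.normalize(feat_q, dim=1)
    feat_k = F.normalize(feat_k, dim=1)
    # Similarity of every query patch to every key patch.
    logits = feat_q @ feat_k.t() / temperature  # (N, N)
    # The matching patch (diagonal) is the positive for each query.
    targets = torch.arange(feat_q.size(0), device=feat_q.device)
    return F.cross_entropy(logits, targets)
```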

Example Results

Unpaired Image-to-Image Translation



Single Image Unpaired Translation

Getting Started

Installation

  • Clone this repo:
git clone https://github.com/YupeiLin2388/Exploring-Negatives-in-Contrastive-Learning-for-Unpaired-Image-to-Image-Translation PUT
cd PUT
  • Install PyTorch and other dependencies (e.g., torchvision, visdom, dominate, gputil).

    For pip users, please run pip install -r requirements.txt

Please refer to the original CUT and CycleGAN repositories to download the horse2zebra and CityScapes datasets.
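The training scripts expect the standard unaligned CUT/CycleGAN dataset layout, shown below for horse2zebra (an assumption based on those repositories; adjust folder names for your own data):

```
datasets/horse2zebra/
├── trainA/   # training images from domain A (horses)
├── trainB/   # training images from domain B (zebras)
├── testA/    # test images from domain A
└── testB/    # test images from domain B
```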

Training

Horse2Zebra

python train.py --dataroot ./datasets/horse2zebra --name h2z_PUT5 --choose_patch 5 --batch_size 1 --gpu_id 0

CityScapes

python train.py   --name citys_PUT5   --choose_patch 5 --batch_size 1 --dataroot ./datasets/cityscapes/ --direction BtoA --gpu_id 0

Single Image Unpaired Training

python train.py --model sincut --name sinPUT5 --dataroot ./datasets/single_image_monet_etretat --choose_patch 5

Testing

Horse2Zebra

python test.py --dataroot ./datasets/horse2zebra --name h2z_pretrained 

CityScapes

python test.py  --dataroot ./datasets/cityscapes/ --direction BtoA  --name CityScapes_pretrained 

Pretrained Models

Download the pre-trained models using the following links and put them under the checkpoints/ directory.

horse2zebra: Google Drive

CityScapes: Google Drive

image2monet: Google Drive
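Assuming the checkpoint convention inherited from CUT, the test commands above expect a layout like the following (the file names are an assumption; adjust to match the downloaded archives):

```
checkpoints/
├── h2z_pretrained/latest_net_G.pth
└── CityScapes_pretrained/latest_net_G.pth
```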

Evaluate

Horse2Zebra

We adapted the evaluation code of F-LSeSim and run test_fid.py to calculate the FID for each epoch. The results for each epoch are stored in result.csv.

python test_fid.py --dataroot ./datasets/horse2zebra --name h2z_pretrained --num_test 500   --gpu_id 0
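To pick the best epoch programmatically, a minimal sketch is shown below. It assumes result.csv has "epoch" and "fid" columns, which may differ from the file actually written by test_fid.py; adjust the field names accordingly.

```python
# Hypothetical helper: find the epoch with the lowest FID.
# Assumption: result.csv contains "epoch,fid" columns.
import csv

def best_epoch(path="result.csv"):
    with open(path, newline="") as f:
        rows = [(int(r["epoch"]), float(r["fid"])) for r in csv.DictReader(f)]
    return min(rows, key=lambda r: r[1])  # -> (epoch, fid) tuple

print(best_epoch())
```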

CityScapes

For the CityScapes dataset, we first resize the generated images to 256×128 and then calculate the FID:

python test_fid.py --name citys_PUT5 --dataroot ./datasets/cityscapes/ --direction BtoA --num_test 500  --aspect_ratio 2.0 --gpu_id 0
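If you need to reproduce the resizing step manually, a minimal sketch is below; the source and destination paths are assumptions, so point them at your own results directory.

```python
# Minimal sketch: resize generated outputs to 256x128 before computing FID.
# The paths below are illustrative assumptions.
from pathlib import Path
from PIL import Image

src = Path("results/citys_PUT5/test_latest/images")
dst = Path("results/citys_PUT5/resized")
dst.mkdir(parents=True, exist_ok=True)
for p in src.glob("*.png"):
    Image.open(p).resize((256, 128), Image.BICUBIC).save(dst / p.name)
```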

For mIoU computation, we use DRN (drn_d_22):

python3 segment.py test -d <data_folder> -c 19 --arch drn_d_22     --pretrain ./checkpoint/drn_d_22_cityscapes.pth --phase test --batch-size 1

Citation

If you use this code for your research, please cite our paper:

@inproceedings{lin2022exploring,
  title={Exploring negatives in contrastive learning for unpaired image-to-image translation},
  author={Lin, Yupei and Zhang, Sen and Chen, Tianshui and Lu, Yongyi and Li, Guangping and Shi, Yukai},
  booktitle={Proceedings of the 30th ACM International Conference on Multimedia},
  pages={1186--1194},
  year={2022}
}

Acknowledgments

Our code is developed based on CUT and F-LSeSim. We also thank DRN for the mIoU computation.
