PyTorch implementation of style transfer models (CycleGAN, CartoonGAN, AnimeGAN)

Snailpong/style_transfer_implementation

style_transfer_implementation

PyTorch implementation of style transfer (landscape cartoonization) models

Models

  • CycleGAN
  • CartoonGAN
  • AnimeGAN

Dependencies

  • PyTorch
  • torchvision
  • numpy
  • PIL
  • OpenCV
  • tqdm
  • click

Usage

  1. Download dataset
  • CycleGAN: Link
  • CartoonGAN, AnimeGAN: Link
  2. Place data

e.g.

.
└── data
    ├── summer2winter_yosemite
    │   ├── trainA
    │   ├── trainB
    │   ├── testA
    │   └── testB
    └── cartoon_dataset
        ├── photo
        │   ├── 0.jpg
        │   └── ...
        ├── cartoon
        │   ├── 0_1.jpg
        │   └── ...
        ├── cartoon_smoothed
        │   ├── 0_1.jpg
        │   └── ...
        └── val
            ├── 1.jpg
            └── ...
  3. Train
  • CycleGAN: `python train_cyclegan.py`

  • CartoonGAN: `python train_cartoongan.py`

  • AnimeGAN: `python train_animegan.py`

  • arguments

    • `dataset_type` (only for CycleGAN): dataset folder name to use (e.g. `summer2winter_yosemite`)
    • `load_model`: True/False (load a saved model)
    • `cuda_visible`: value for `CUDA_VISIBLE_DEVICES` (e.g. `1`)
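The option names above come from the scripts themselves (which use click); as a minimal sketch of how such options typically work, here is an argparse equivalent showing `cuda_visible` being mapped to the `CUDA_VISIBLE_DEVICES` environment variable. The parsing details are an assumption, not the repository's actual code.

```python
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument("--dataset_type", default="summer2winter_yosemite")  # CycleGAN only
parser.add_argument("--load_model", default="False", choices=["True", "False"])
parser.add_argument("--cuda_visible", default="0")  # value for CUDA_VISIBLE_DEVICES

args = parser.parse_args(["--cuda_visible", "1", "--load_model", "True"])

# Restrict visible GPUs before any CUDA context is created
os.environ["CUDA_VISIBLE_DEVICES"] = args.cuda_visible
resume = args.load_model == "True"
```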
  4. Test
  • CycleGAN: `python test_cyclegan.py`

  • CartoonGAN: `python test_cartoongan.py`

  • AnimeGAN: `python test_cartoongan.py --model_name=animegan`

  • arguments

    • `dataset_type` (only for CycleGAN)
    • `model_type` (only for CycleGAN): x2y/y2x
    • `image_path`: path to the folder of images to convert
    • `cuda_visible`
    • `model_name` (only for CartoonGAN): cartoongan/animegan
    • `is_crop` (only for CartoonGAN): True/False; crop and resize images to (256, 256)
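The `is_crop` option's crop-and-resize to (256, 256) can be sketched as below. This is a guess at the behavior (center-crop to a square, then resize); the actual script may crop differently, and `crop_and_resize` is a hypothetical helper, not a function from the repository.

```python
from PIL import Image

def crop_and_resize(img, size=256):
    """Center-crop an image to a square, then resize to (size, size)."""
    w, h = img.size
    side = min(w, h)
    left, top = (w - side) // 2, (h - side) // 2
    square = img.crop((left, top, left + side, top + side))
    return square.resize((size, size), Image.BICUBIC)

# Example on a dummy 640x480 image
out = crop_and_resize(Image.new("RGB", (640, 480)))
```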

Results

  • We trained the models with the Shinkai style.
Photo | CartoonGAN | AnimeGAN
(Result images: each row shows an input photo next to the corresponding CartoonGAN and AnimeGAN outputs.)

Observation & Discussion

  • AnimeGAN preserved the original texture and color better than CartoonGAN did.
  • CartoonGAN made good use of the black edges characteristic of cartoons.
  • CartoonGAN and AnimeGAN came closest to the textures of TV dramas and movies, respectively.
  • Beyond a certain point, AnimeGAN no longer reduced the discriminator's adversarial loss.
  • In CartoonGAN, the color expression changed as the epochs increased and became uniform across all generated images.
  • Performance was better when the cartoon training data was generated by cropping the high-resolution images rather than resizing them.

Code Reference
