
StyleCineGAN: Landscape Cinemagraph Generation using a Pre-trained StyleGAN (CVPR 2024)

This is the official PyTorch implementation of "StyleCineGAN: Landscape Cinemagraph Generation using a Pre-trained StyleGAN" (CVPR 2024).

(Teaser figure)

Abstract: We propose a method that can generate cinemagraphs automatically from a still landscape image using a pre-trained StyleGAN. Inspired by the success of recent unconditional video generation, we leverage a powerful pre-trained image generator to synthesize high-quality cinemagraphs. Unlike previous approaches that mainly utilize the latent space of a pre-trained StyleGAN, our approach utilizes its deep feature space for both GAN inversion and cinemagraph generation. Specifically, we propose multi-scale deep feature warping (MSDFW), which warps the intermediate features of a pre-trained StyleGAN at different resolutions. By using MSDFW, the generated cinemagraphs are of high resolution and exhibit plausible looping animation. We demonstrate the superiority of our method through user studies and quantitative comparisons with state-of-the-art cinemagraph generation methods and a video generation method that uses a pre-trained StyleGAN.
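The core of MSDFW is backward-warping a pre-trained StyleGAN's intermediate feature maps, at each of several resolutions, with a motion field resized to match. The sketch below illustrates that idea only; it is not the repository's implementation, and the names `warp_feature` and `msdfw` are hypothetical.

```python
# Illustrative sketch of multi-scale deep feature warping (MSDFW):
# feature maps at different resolutions are all warped by one motion
# field, downsampled and rescaled to each feature resolution.
import torch
import torch.nn.functional as F

def warp_feature(feat, flow):
    """Backward-warp a feature map by a per-pixel flow (in pixels)."""
    b, _, h, w = feat.shape
    # Base sampling grid in [-1, 1] normalized coordinates.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij"
    )
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    # Convert the pixel flow to normalized offsets and add to the grid.
    norm_flow = torch.stack(
        (flow[:, 0] * 2 / max(w - 1, 1), flow[:, 1] * 2 / max(h - 1, 1)),
        dim=-1,
    )
    return F.grid_sample(feat, grid + norm_flow, align_corners=True)

def msdfw(features, flow):
    """Warp feature maps of several resolutions with one motion field."""
    warped = []
    for feat in features:
        h, w = feat.shape[-2:]
        # Resize the flow to this resolution and rescale its magnitude.
        f = F.interpolate(flow, size=(h, w), mode="bilinear",
                          align_corners=True)
        f = f * (w / flow.shape[-1])
        warped.append(warp_feature(feat, f))
    return warped
```

Warping features rather than pixels lets the frozen StyleGAN decoder resynthesize each warped frame, which is why the resulting cinemagraphs stay at the generator's native resolution.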


Getting Started

Environment Setup

We recommend using Docker. Use the seokg1023/vml-pytorch:vessl Docker image:

docker pull seokg1023/vml-pytorch:vessl

All dependencies for the environment are provided in requirements.txt.

pip install -r requirements.txt

Download Checkpoints

We provide pre-trained checkpoints of StyleGAN2 and encoder networks here.
Download and unzip the checkpoint files and place them in ./pretrained_models.


Inference

We provide inference code for the proposed MSDFW method, which follows the GAN inversion process. Run main.py as in the following example:

python main.py --img_path ./samples/0002268 --save_dir ./results

To test the method with your own data, place the data as follows:

$IMG_PATH$
    └── $FILE_NAME$
         ├── $FILE_NAME$.png
         ├── $FILE_NAME$_mask.png
         └── $FILE_NAME$_motion.npy
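A small helper can confirm a custom input directory matches this layout before running inference. This is a hypothetical convenience script, not part of the repository; only the file-name convention comes from the layout above.

```python
# Hypothetical checker for the input layout described above:
# $FILE_NAME$/$FILE_NAME$.png, $FILE_NAME$_mask.png, $FILE_NAME$_motion.npy
import os

def validate_input_dir(path):
    """Return the list of required files missing from one input directory."""
    name = os.path.basename(os.path.normpath(path))
    required = [
        f"{name}.png",         # still landscape image
        f"{name}_mask.png",    # mask image (region to animate)
        f"{name}_motion.npy",  # motion field as a NumPy array
    ]
    return [f for f in required if not os.path.isfile(os.path.join(path, f))]
```

For example, `validate_input_dir("./samples/0002268")` returns an empty list when the directory is complete, and the missing file names otherwise.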

Acknowledgement

The code for this project was built using the codebases of StyleGAN2, pix2pixHD, FeatureStyleEncoder, and DatasetGAN. The symmetric-splatting code was built on top of softmax-splatting. We are very thankful to the authors of the corresponding works for releasing their code.


Citation

@InProceedings{Choi_2024_CVPR,
    author    = {Choi, Jongwoo and Seo, Kwanggyoon and Ashtari, Amirsaman and Noh, Junyong},
    title     = {StyleCineGAN: Landscape Cinemagraph Generation using a Pre-trained StyleGAN},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {7872-7881}
}
