This is the official PyTorch implementation of our paper: "TSSAT: Two-Stage Statistics-Aware Transformation for Artistic Style Transfer". (ACM MM 2023)
Artistic style transfer aims to create new artistic images by rendering a given photograph in the target artistic style. Existing methods learn styles simply from global statistics or local patches, without careful consideration of how artists actually paint. Consequently, the stylization results either fail to capture abundant and diversified local style patterns, or contain undesired semantic content from the style image and deviate from the global style distribution.

To address these issues, we imitate the human drawing process and propose a Two-Stage Statistics-Aware Transformation (TSSAT) module: it first builds the global style foundation by aligning the global statistics of the content and style features, and then enriches local style details by swapping local statistics (instead of local features) in a patch-wise manner, significantly improving the stylization effects.

Moreover, to further enhance both content and style representations, we introduce two novel losses: an attention-based content loss and a patch-based style loss. The former enables better content preservation by enforcing that the semantic relations within the content image are retained during stylization, while the latter increases the local style similarity between the style and stylized images. Extensive experiments verify the effectiveness of our method.
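The two-stage transformation described above can be sketched in a few lines of PyTorch. This is only a minimal illustration under our own assumptions, not the paper's actual implementation: the function names (`adain`, `local_stats_swap`, `tssat`), the cosine-similarity patch matching, the channel-wise per-patch statistics, and the use of non-overlapping patches that exactly tile the feature map are all simplifications we chose for clarity.

```python
import torch
import torch.nn.functional as F


def adain(content, style, eps=1e-5):
    # Stage 1 (global): align channel-wise mean/std of the content
    # feature map to those of the style feature map.
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content - c_mean) / c_std + s_mean


def local_stats_swap(content, style, patch=4, eps=1e-5):
    # Stage 2 (local): for each content patch, find the most similar
    # style patch and transfer that patch's statistics -- not its raw
    # features. Assumes H and W are divisible by `patch`.
    B, C, H, W = content.shape
    cp = F.unfold(content, patch, stride=patch)  # B x (C*p*p) x Nc
    sp = F.unfold(style, patch, stride=patch)    # B x (C*p*p) x Ns
    # Cosine similarity between every content patch and every style patch.
    sim = torch.bmm(F.normalize(cp, dim=1).transpose(1, 2),
                    F.normalize(sp, dim=1))      # B x Nc x Ns
    idx = sim.argmax(dim=2)                      # B x Nc
    matched = torch.gather(sp, 2, idx.unsqueeze(1).expand(-1, sp.size(1), -1))
    # Channel-wise mean/std within each patch (our assumption).
    cp_ = cp.view(B, C, patch * patch, -1)
    mp_ = matched.view(B, C, patch * patch, -1)
    c_mean, c_std = cp_.mean(2, keepdim=True), cp_.std(2, keepdim=True) + eps
    m_mean, m_std = mp_.mean(2, keepdim=True), mp_.std(2, keepdim=True) + eps
    swapped = m_std * (cp_ - c_mean) / c_std + m_mean
    # Non-overlapping patches, so fold reconstructs the map exactly.
    return F.fold(swapped.view(B, C * patch * patch, -1),
                  (H, W), patch, stride=patch)


def tssat(content, style, patch=4):
    # Two stages in sequence: global alignment, then local statistics swap.
    return local_stats_swap(adain(content, style), style, patch)
```

In the paper this module operates on VGG feature maps inside an encoder-decoder pipeline; the sketch above only shows the statistics manipulation itself.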
We recommend the following configurations:
- python 3.8
- PyTorch 1.8.0
- CUDA 11.1
- Download the content dataset: MS-COCO.
- Download the style dataset: WikiArt.
- Download the pre-trained VGG-19 model.
- Run the following command:
python train.py --content_dir /data/train2014 --style_dir /data/WikiArt/train
- Put your trained model into the model/ folder.
- Put some sample photographs into the content/ folder.
- Put some artistic style images into the style/ folder.
- Run the following command:
python Eval.py --content content/1.jpg --style style/1.jpg
We provide the pre-trained model at link.
We compare our model with some existing artistic style transfer methods, including AdaIN, WCT, Avatar-Net, SANet, ArtFlow, IEST, AdaAttN, and StyTr2.
Our code is partly based on AdaIN. Thanks to the authors for both their paper and code.