This is the official implementation for MOWA (arXiv 2024).
Kang Liao, Zongsheng Yue, Zhonghua Wu, Chen Change Loy
S-Lab, Nanyang Technological University
MOWA is a practical multiple-in-one image warping framework for computational photography that covers six distinct tasks. Unlike previous works tailored to specific tasks, our method solves various warping tasks from different camera models or manipulation spaces within a single framework. It also generalizes to novel scenarios, as evidenced by both cross-domain and zero-shot evaluations.
- The first practical multiple-in-one image warping framework, especially in the field of computational photography.
- We propose to mitigate the difficulty of multi-task learning by decoupling the motion estimation at both the region level and the pixel level.
- A prompt learning module, guided by a lightweight point-based classifier, is designed to facilitate task-aware image warping.
- We show that, through multi-task learning, our framework develops a robust generalized warping strategy that achieves improved performance across various tasks and even generalizes to unseen ones.
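To illustrate the region-to-pixel decoupling idea, the sketch below upsamples a coarse grid of control-point displacements (region level) into a dense per-pixel flow field (pixel level) with plain bilinear interpolation. This is a purely illustrative helper under assumed shapes, not the actual MOWA motion estimator:

```python
import numpy as np

def dense_flow_from_control_points(cp_disp, out_h, out_w):
    """Bilinearly upsample a sparse grid of control-point displacements
    (region level) into a dense per-pixel flow field (pixel level).

    cp_disp: (gh, gw, 2) array of (dx, dy) displacements on a coarse grid.
    Returns: (out_h, out_w, 2) dense flow.
    Hypothetical helper for illustration only.
    """
    gh, gw, _ = cp_disp.shape
    # Map each output pixel to fractional coordinates on the control grid.
    ys = np.linspace(0, gh - 1, out_h)
    xs = np.linspace(0, gw - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, gh - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, gw - 1)
    wy = (ys - y0)[:, None, None]; wx = (xs - x0)[None, :, None]
    # Standard bilinear blend of the four surrounding control points.
    top = cp_disp[y0][:, x0] * (1 - wx) + cp_disp[y0][:, x1] * wx
    bot = cp_disp[y1][:, x0] * (1 - wx) + cp_disp[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy
```

Estimating only a coarse grid of points per task keeps the shared backbone simple, while the dense flow handles fine-grained, per-pixel deformation.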
Check out more visual results and interactions here.
- MOWA has been included in AI Art Weekly #80.
- 2024.04.16: The arXiv version of the paper is online.
- 2024.06.30: Released the code and pre-trained model.
- Release a demo for users to try MOWA online.
- Release an interactive interface to drag the control points and perform customized warpings.
We recommend using a conda virtual environment to run the code:

```shell
conda create -n mowa python=3.8.13
conda activate mowa
pip install -r requirements.txt
```
We mainly explored six representative image warping tasks in this work. The datasets are derived/constructed from previous works. For the convenience of training and testing in one project, we cleaned and arranged these six types of datasets with unified structures and more visual assistance. Please refer to the category and download links in Datasets.
Download the pretrained model here and put it into the `./checkpoint` folder.
Customize the paths of the checkpoint and the test set, then run:

```shell
sh scripts/test.sh
```
The warped images and intermediate results, such as the control points and warping flow, can be found in the `./results` folder. Evaluation metrics such as PSNR and SSIM are also reported along with the task ID.
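As a quick sanity check on the reported numbers, PSNR between a warped result and its ground truth can be computed in a few lines of NumPy. The file paths in the usage comment are hypothetical; the repo's own evaluation code remains the reference:

```python
import numpy as np

def psnr(img1, img2, max_val=255.0):
    """Peak signal-to-noise ratio between two images of the same shape."""
    mse = np.mean((img1.astype(np.float64) - img2.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Hypothetical usage: compare a warped output against its ground truth.
# warped = imageio.imread("results/task1/0001_warped.png")
# gt     = imageio.imread("data/task1/gt/0001.png")
# print(f"PSNR: {psnr(warped, gt):.2f} dB")
```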
In the portrait correction task, the ground-truth warped image and flow are unavailable, so image quality metrics cannot be evaluated. Instead, we report a task-specific metric (ShapeAcc) that measures how well the face distortion is corrected. To reproduce the warping performance on portrait photos, customize the paths of the checkpoint and test set, and run:
```shell
sh scripts/test_portrait.sh
```
The warped images can also be found in the test path.
Customize the paths of all warping training datasets in a list, and run:
```shell
sh scripts/train.sh
```
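Training interleaves batches from the six warping task datasets. A minimal round-robin task sampler (purely illustrative, under assumed loader interfaces; not the repo's actual data pipeline) might look like:

```python
from itertools import cycle

def round_robin_batches(task_loaders):
    """Yield (task_id, batch) pairs, cycling over the task datasets so that
    every warping task contributes evenly to each training epoch.
    `task_loaders` maps task id -> an iterable of batches (illustrative)."""
    iters = {tid: iter(loader) for tid, loader in task_loaders.items()}
    for tid in cycle(sorted(iters)):
        try:
            yield tid, next(iters[tid])
        except StopIteration:
            return  # stop when the shortest task dataset is exhausted
```

In practice, datasets of very different sizes usually call for weighted or resampled scheduling rather than a plain round-robin.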
- MOWA-onnxrun (ONNX Runtime Implementation): https://github.com/hpc203/MOWA-onnxrun
TBD
The current version of MOWA is inspired by previous task-specific image warping works such as RectanglingPano, DeepRectangling, RecRecNet, PCN, Deep_RS-HM, and SSPC.
```
@article{liao2024mowa,
  title={MOWA: Multiple-in-One Image Warping Model},
  author={Liao, Kang and Yue, Zongsheng and Wu, Zhonghua and Loy, Chen Change},
  journal={arXiv preprint arXiv:2404.10716},
  year={2024}
}
```
For any questions, feel free to email [email protected].
This project is licensed under NTU S-Lab License 1.0.