KEEP: Kalman-Inspired Feature Propagation for Video Face Super-Resolution

Ruicheng Feng, Chongyi Li, Chen Change Loy
S-Lab, Nanyang Technological University
ECCV 2024

[Showcase video of KEEP restoration results]
🔥 For more results, visit our project page 🔥
⭐ If you find this project helpful, please help by starring this repo. Thanks! 🤗

Update

  • 2024.08: We released the initial version of the inference code and models. Stay tuned for continuous updates!
  • 2024.07: This repo was created!

Getting Started

Dependencies and Installation

  • PyTorch >= 1.7.1
  • CUDA >= 10.1
  • Other required packages in requirements.txt
# Clone this repository. Don't forget to add --recursive!
git clone --recursive https://github.com/jnjaby/KEEP
cd KEEP

# create new anaconda env
conda create -n keep python=3.8 -y
conda activate keep

# install python dependencies
pip3 install -r requirements.txt
python basicsr/setup.py develop
conda install -c conda-forge dlib # only for face detection or cropping with dlib
conda install -c conda-forge ffmpeg

[Optional] If you forgot to clone the repo with --recursive, you can update the submodules by running:

git submodule init
git submodule update
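
After installation, you can optionally run a quick sanity check to confirm that PyTorch sees your GPU and that the locally installed basicsr package is importable. This is a minimal check, assuming python basicsr/setup.py develop completed without errors.

# Optional sanity check: verify the PyTorch version, CUDA availability, and the basicsr dev install
python -c "import torch, basicsr; print('torch', torch.__version__, '| CUDA available:', torch.cuda.is_available())"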

Quick Inference

Download Pre-trained Models

All pretrained models can be downloaded automatically during the first inference run. Alternatively, you can download them manually from Releases V0.1.0 and place them in the weights folder.
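
If you prefer to fetch the weights manually, the sketch below shows the general shape of that step; the asset file name is a placeholder, so check the Releases V0.1.0 page for the actual file names.

# Manual download sketch (replace <model_name>.pth with an actual asset from the release page)
mkdir -p weights
wget -P weights/ https://github.com/jnjaby/KEEP/releases/download/v0.1.0/<model_name>.pth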

Prepare Testing Data

We provide both synthetic (VFHQ) and real (collected) examples in the assets/examples folder. If you would like to test your own face videos, place them in the same folder. You can also download the full synthetic and real test data from [Google Drive].
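
As a sketch, testing your own clip only requires copying it next to the provided examples (the file name below is hypothetical):

# Copy your own face video into the examples folder (my_face_video.mp4 is a placeholder name)
cp /path/to/my_face_video.mp4 assets/examples/
ls assets/examples/   # should now list synthetic_1.mp4, real_1.mp4, my_face_video.mp4, ...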

Inference

[Note] If you want to compare against KEEP in your paper, please make sure the face alignment is consistent, and run the commands below with --has_aligned to indicate that the faces are already cropped and aligned. The results will be saved in the results folder.

🧑🏻 Video Face Restoration for synthetic data (cropped and aligned face)

# For cropped and aligned faces
python inference_keep.py -i=./assets/examples/synthetic_1.mp4 -o=results/ --has_aligned --save_video -s=1

🎬 Video Face Restoration for real data (in the wild)

# For whole video
# Add '--bg_upsampler realesrgan' to enhance the background regions with Real-ESRGAN
# Add '--face_upsample' to further upsample the restored faces with Real-ESRGAN
# Add '--draw_box' to show the bounding boxes of detected faces.
python inference_keep.py -i=./assets/examples/real_1.mp4 -o=results/ --draw_box --save_video -s=1 --bg_upsampler=realesrgan
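
To restore several clips in one go, a plain shell loop over a folder works; this is only a sketch reusing the flags from the command above, not an official batch mode.

# Sketch: run inference on every .mp4 in the examples folder with the same flags as above
for vid in ./assets/examples/*.mp4; do
    python inference_keep.py -i="$vid" -o=results/ --save_video -s=1 --bg_upsampler=realesrgan
done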

Citation

If you find our repo useful for your research, please consider citing our paper:

@InProceedings{feng2024keep,
   title     = {Kalman-Inspired FEaturE Propagation for Video Face Super-Resolution},
   author    = {Feng, Ruicheng and Li, Chongyi and Loy, Chen Change},
   booktitle = {European Conference on Computer Vision (ECCV)},
   year      = {2024}
}

License and Acknowledgement

This project is open-sourced under the NTU S-Lab License 1.0. Redistribution and use should follow this license. The code framework is mainly modified from CodeFormer. Please refer to the original repo for more usage details and documentation.

Contact

If you have any questions, please feel free to contact us at [email protected].
