it works NOT with any NOVEL DATA!!!!! it ONLY works with EXAMPLES #8
Comments
Hi! Our current model does not handle real-world images/applications well, as you have found 🙁. This is likely because our training dataset is collected from other task-specific image restoration projects (listed in our paper, Appendix A), which have relatively few image pairs and thus lack the generalization ability needed for your custom images. We actually mentioned this problem in the notice section of our documentation, and we also provided more examples from our test dataset to illustrate the method. If you want the model to be more powerful and reliable on real-world pictures, one possible solution is to collect more images from different sources and adopt the training strategy of Real-ESRGAN or DiffBIR. Again, this project is mainly written for academic purposes. Universal Image Restoration is a new concept and worth more exploration and attention. We are really happy to know that many people like this idea and want to try their own images. Thank you! We will keep improving our model and make it more practical in future work!
So is it just overfitting on the training data? I tried painting some white lines on a random image and it didn't work.
It should work for face inpainting, since we only use CelebA-HQ-256 in training. Moreover, our model also works well on test images, and we are not aiming to overfit the training data. For deblurring, we use GoPro, a synthetic dataset in which each image is generated by averaging several neighboring frames to simulate motion blur. As requested, I will try to collect more image pairs from different datasets to make our model capable of handling more real degradation types. BTW, we also found that directly resizing input images leads to poor performance on most tasks. We will try to add a resize step to training to improve generalization.
That's what I tried: I manually scaled images to 256x256, but scaling didn't help at all.
@Denys88 Yes, for face inpainting you can choose images of size 256x256. I manually downloaded an image and added the mask to it; the result is below. The mask-adding code is:
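The original snippet from this comment is not preserved in the thread. As a stand-in, here is a minimal sketch of what such mask-adding code typically looks like: it paints a white rectangle onto a 256x256 image so the inpainting model treats that region as missing. The function name `add_mask` and the default mask position/size are assumptions, not the authors' actual code.

```python
# Hypothetical sketch: paint a white square "mask" onto a 256x256 image,
# mimicking the masked input an inpainting model expects.
import numpy as np

def add_mask(img: np.ndarray, top: int = 96, left: int = 96, size: int = 64) -> np.ndarray:
    """Return a copy of img (H, W, 3, uint8) with a white square painted on it."""
    assert img.shape[:2] == (256, 256), "inpainting model expects 256x256 inputs"
    masked = img.copy()
    masked[top:top + size, left:left + size, :] = 255  # white = region to restore
    return masked

if __name__ == "__main__":
    face = np.zeros((256, 256, 3), dtype=np.uint8)  # stand-in for a real photo
    out = add_mask(face)
    print(out[100, 100])  # pixel inside the masked square
```

For a real photo you would load the image with Pillow or OpenCV first and make sure it is resized/cropped to 256x256 before applying the mask.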
For blurry images, I don't know how to choose them from the internet. Maybe I should retrain the model with a real motion-blur dataset (but paired motion-blur datasets are really difficult to find or make, which is why we always use synthetic images).
Hi, our inpainting model is trained on the CelebA-HQ face dataset (with only one aligned face per image). You can find more information here: https://www.kaggle.com/datasets/badasstechie/celebahq-resized-256x256
@Algolzw Any future plan to provide pre-trained models for real-life use cases? |
@LukaGiorgadze I will provide a slightly better weight (for resized images) later this month. |
Did you publish a wrong checkpoint? Or is this a bogus project?