
Can you resolve the plaid pattern in images? #20

Open
Black3rror opened this issue Dec 1, 2020 · 2 comments

@Black3rror

Hello, and thank you for sharing this great repository.
Looking closely, I can see a plaid pattern in the reconstructed images, which I believe results from splitting the image into patches (the borders of the reconstructed patches have slightly lower quality). Do you know of any way to resolve this, perhaps by overlapping the patches (which may increase the compressed size)?
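To make the overlapping-patch idea concrete, here is a minimal sketch (in NumPy, with hypothetical function and parameter names, not code from this repository) of averaging overlapping reconstructed patches back into one image, so that border pixels are covered by more than one patch:

```python
import numpy as np

def reconstruct_overlapping(patches, coords, shape):
    """Average overlapping reconstructed patches into a full image.

    patches: list of 2D arrays (grayscale for simplicity)
    coords:  top-left (y, x) of each patch in the output image
    shape:   (H, W) of the output image
    """
    acc = np.zeros(shape)  # sum of patch values per pixel
    cnt = np.zeros(shape)  # how many patches cover each pixel
    for p, (y, x) in zip(patches, coords):
        h, w = p.shape
        acc[y:y + h, x:x + w] += p
        cnt[y:y + h, x:x + w] += 1
    return acc / np.maximum(cnt, 1)  # average where patches overlap
```

Note that overlap does increase the amount of data to encode, since overlapped regions are compressed more than once; the stride/overlap width is a quality-vs-size trade-off.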

alexandru-dinu (Owner) commented Dec 1, 2020

Hello,

Thank you for your interest!
You are correct: the reconstructed image will have these artifacts, consisting of noisy edges where patches touch each other. To circumvent this, in a somewhat hacky manner, I devised smoothing.py, which performs linear interpolation at the borders, making the image look cleaner.
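The border-interpolation trick can be sketched roughly as follows (a minimal NumPy illustration of the idea, not the actual smoothing.py; the function name and the `band` half-width parameter are made up here, and only vertical seams on a grayscale image are handled):

```python
import numpy as np

def smooth_vertical_seams(img, patch_size, band=4):
    """Replace a 2*band-wide strip around each vertical patch seam with
    a linear interpolation between the strip's edge columns."""
    out = img.astype(np.float64).copy()
    h, w = out.shape
    for x in range(patch_size, w, patch_size):  # column of each seam
        lo, hi = x - band, x + band
        if lo < 0 or hi > w:
            continue
        t = np.linspace(0.0, 1.0, hi - lo)   # blend weights across strip
        left = out[:, lo][:, None]           # column at strip start
        right = out[:, hi - 1][:, None]      # column at strip end
        out[:, lo:hi] = (1 - t) * left + t * right
    return out
```

Horizontal seams would be handled the same way on the transposed image.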

This is an open problem. A possible cause, in my opinion, is that, given an image patch, the network has no explicit information about the surrounding patches, so it does not know how to treat the particular pixels on the border.

I have started to brush up the project, so far ensuring that training works with the latest PyTorch (1.7.0). I plan to devise some experiments to fix the patching issues, but feel free to contribute if you have an idea!

Keep an eye on #17 for further updates!

All the best,
Alex

Black3rror (Author) commented Dec 1, 2020

Thanks for the explanation. I think the cause you described is correct. In other words, each latent variable mainly affects its corresponding pixels in the output, but it also has a wide receptive field, so the latent variables of neighboring patches would affect the border pixels of the current patch. Let me know if I'm wrong.
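To give a feel for how wide that receptive field gets, here is a small sketch (standard receptive-field arithmetic; the layer specs are hypothetical, not this repo's architecture) computing how many input pixels one output unit of a conv stack sees:

```python
def receptive_field(layers):
    """Receptive field of one output unit for a stack of conv layers,
    each given as (kernel_size, stride)."""
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump  # each layer widens the field
        jump *= s             # stride compounds across layers
    return rf

# e.g. three 3x3 convs with stride 2
print(receptive_field([(3, 2), (3, 2), (3, 2)]))  # → 15
```

Even this shallow stack sees a 15-pixel-wide region, so a latent near a patch border would "want" context from the neighboring patch that it never receives.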

One possible fix is to not divide the image into patches at all and instead sweep the whole image with the conv network. I'm doing something similar to what you've done, using 32x32 patches as training data, and I tried this method. The result showed a different kind of artifact, which I believe appears because padding makes up a significant fraction of a 32x32 training patch, whereas in a high-resolution image there is relatively little padding. Since you trained your network on 128x128 patches, you should not see this artifact as much. What do you think about it?

Best regards,
Amin
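The whole-image sweep described above works because a network with no fully connected layers accepts any input resolution. A minimal sketch (a toy fully convolutional autoencoder in PyTorch; this architecture is illustrative, not the one from either project) showing the same weights running on a training-sized patch and on a full image:

```python
import torch
import torch.nn as nn

# Toy fully convolutional autoencoder: stride-2 convs downsample,
# transposed convs upsample back to the input resolution.
ae = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),
)

patch = torch.randn(1, 3, 32, 32)    # training-sized input
full = torch.randn(1, 3, 256, 384)   # whole image, no patching
assert ae(patch).shape == patch.shape
assert ae(full).shape == full.shape
```

The padding mismatch mentioned above is visible here: on a 32x32 patch the `padding=1` border pixels are a noticeable fraction of the input, while on a 256x384 image they are negligible, so border statistics differ between training and inference.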
