
The result is not good #5

Open
D-Mad opened this issue Feb 20, 2023 · 3 comments

Comments


D-Mad commented Feb 20, 2023

Your work is awesome!

I tried running Video Harmonization with your pretrained model, but the result is not good.

My foreground (FG) is a large audience sitting in the stands.
My background (BG) is the audience's rows of seats.
My mask is already extracted.

The result still does not look good. Is it because of the datasets?

ZHKKKe (Owner) commented Feb 24, 2023

Thanks for your attention.
I think the problem is caused by the dataset (i.e., a domain gap), as the released model is trained only on the iHarmony4 dataset.
Fine-tuning the model on your own data may help.
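A minimal fine-tuning skeleton in generic PyTorch might look like the following. Everything here is a placeholder, not this repo's actual API: the tiny model stands in for the Harmonizer, the random tensors stand in for composite/mask/ground-truth frames, and the L1 loss is only an example objective.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder model: a 4-channel input (RGB composite + 1-channel mask)
# mapped to a 3-channel harmonized output. Swap in the repo's Harmonizer.
model = nn.Sequential(
    nn.Conv2d(4, 8, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 3, 3, padding=1),
)
# model.load_state_dict(torch.load('pretrained.pth'))  # start from released weights

optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)  # small lr for fine-tuning
criterion = nn.L1Loss()  # example objective only

# Placeholder data: composite frames, foreground masks, ground-truth frames
comp = torch.rand(2, 3, 32, 32)
mask = torch.rand(2, 1, 32, 32)
real = torch.rand(2, 3, 32, 32)
loader = DataLoader(TensorDataset(comp, mask, real), batch_size=2)

for comp_b, mask_b, real_b in loader:
    pred = model(torch.cat([comp_b, mask_b], dim=1))
    loss = criterion(pred, real_b)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

A low learning rate and a small number of epochs are the usual starting point when fine-tuning a pretrained model on a small custom set, to avoid destroying the pretrained weights.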


cm06137 commented Mar 6, 2023

Hi, thanks for your work!
I just retrained your model using the iHarmony4 datasets.
I converted the final checkpoint_60.ckpt to checkpoint_60.pth using only the "model" parameter in checkpoint_60.ckpt.
Then I used val_harmonizer.py to test the harmonization results, but they are quite bad. Is it because I trained without the data augmentation mentioned in your paper?

Thanks in advance for your help!

Here is my validation result:


|            | MSE       | PSNR    | SSIM   | fMSE       |
|------------|-----------|---------|--------|------------|
| ALL        | 1534.1038 | 19.8855 | 0.8698 | 15725.2368 |
| HCOCO      | 1097.0617 | 20.9867 | 0.9123 | 16839.8725 |
| HAdobe5k   | 1904.1659 | 18.7368 | 0.8112 | 12781.4370 |
| HFlickr    | 2911.8014 | 16.8332 | 0.7990 | 17104.4307 |
| Hday2night | 1021.2000 | 22.0777 | 0.8908 | 19053.4366 |


xyxiaoAk commented Dec 25, 2023

> (quoting @cm06137's comment and validation results above)

Hi, how did you convert the checkpoint? I am also extracting the "model" part, but it reports errors during testing:

RuntimeError: Error(s) in loading state_dict for Harmonizer:
        Missing key(s) in state_dict: "backbone._blocks.0._depthwise_conv.weight", "backbone._blocks.0._bn1.weight", "backbone._blocks.0._bn1.bias", "backbone._blocks.0._bn1.running_mean", "backbone._blocks.0._bn1.running_var", "backbone._blocks.0._se_reduce.weight", "backbone._blocks.0._se_reduce.bias", "backbone._blocks.0._se_expand.weight", "backbone._blocks.0._se_expand.bias", "backbone._blocks.0._project_conv.weight", "backbone._blocks.0._bn2.weight", "backbone._blocks.0._bn2.bias", "backbone._blocks.0._bn2.running_mean", "backbone._blocks.0._bn2.running_var", "backbone._blocks.1._expand_conv.weight", "backbone._blocks.1._bn0.weight", "backbone._blocks.1._bn0.bias", "backbone._blocks.1._bn0.running_mean", "backbone._blocks.1._bn0.running_var", "backbone._blocks.1._depthwise_conv.weight", "backbone._blocks.1._bn1.weight", ...

My code is as follows:

import torch

# Path to the checkpoint file
checkpoint_path = 'checkpoint_60.ckpt'

# Load the training checkpoint
checkpoint = torch.load(checkpoint_path)
# Extract the model parameters
model_state_dict = checkpoint['model']

torch.save(model_state_dict, "checkpoint_60.pth")

Thanks for your help!
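One possible cause, not confirmed in this thread: a "Missing key(s)" error often means the saved state_dict keys carry a wrapper prefix such as "module." (added by torch.nn.DataParallel), so they no longer match the model's own key names. A sketch of stripping such a prefix before saving, demonstrated on a plain dict so it runs without the checkpoint file (the "module." prefix itself is an assumption; print a few keys of your own state_dict to check):

```python
def strip_prefix(state_dict, prefix='module.'):
    """Remove `prefix` from every key that starts with it, leaving others unchanged."""
    return {
        (k[len(prefix):] if k.startswith(prefix) else k): v
        for k, v in state_dict.items()
    }

# Stand-in keys resembling the names in the error message above
sd = {
    'module.backbone._blocks.0._depthwise_conv.weight': 1,
    'module.backbone._blocks.0._bn1.weight': 2,
}
print(strip_prefix(sd))
# In the real conversion script you would do:
#   model_state_dict = strip_prefix(checkpoint['model'])
#   torch.save(model_state_dict, 'checkpoint_60.pth')
```

Alternatively, compare `list(state_dict.keys())[:5]` against the Harmonizer's `model.state_dict().keys()` to see exactly how the names differ before deciding what to strip or rename.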
