CLIPVisionModelWithProjection Shape Size Error #6
Unfortunately, running any of the example workflows I hit the CLIPVisionModelWithProjection shape size error in the title. Is this related to my torch or transformers version? I'm running transformers 4.40.2 and torch 2.3.0+cu118. Can you please help me fix it?
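For context, a shape size error at this point usually means the tensor reaching the vision encoder doesn't match the resolution the checkpoint was trained on. Below is a minimal sketch of the expected shapes, assuming the stock openai/clip-vit-base-patch32 checkpoint (not necessarily the image encoder AniPortrait actually loads):

```python
# Sketch only: shows the input/output shapes CLIPVisionModelWithProjection
# expects, using the stock OpenAI checkpoint as a stand-in (assumption).
import torch
from PIL import Image
from transformers import CLIPImageProcessor, CLIPVisionModelWithProjection

model = CLIPVisionModelWithProjection.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")

# The processor resizes/crops any input to the model's training resolution,
# so pixel_values always comes out (batch, 3, 224, 224) for this checkpoint.
image = Image.new("RGB", (512, 512))  # stand-in for a reference image
inputs = processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # torch.Size([1, 3, 224, 224])

with torch.no_grad():
    out = model(**inputs)
print(out.image_embeds.shape)  # torch.Size([1, 512]), the projection dim
```

If pixel_values reaches the model without going through the processor (for example, a raw non-square image tensor), the patch embedding fails with exactly this kind of size mismatch.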
I can also add the output before the error:

got prompt
[rgthree] Using rgthree's optimized recursive execution.
Some weights of the model checkpoint were not used when initializing UNet2DConditionModel:
['conv_norm_out.weight', 'conv_norm_out.bias', 'conv_out.weight', 'conv_out.bias']
loaded temporal unet's pretrained weights from E:\COMFY\ComfyUI-robe\custom_nodes\ComfyUI_Aniportrait\pretrained_model\stable-diffusion-v1-5\unet ...
Load motion module params from E:\COMFY\ComfyUI-robe\custom_nodes\ComfyUI_Aniportrait\pretrained_model\motion_module.pth
Loaded 453.20928M-parameter motion module

I'm running diffusers 0.26.2.
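For reproducibility, the versions above can be confirmed straight from Python; the comments show the values reported in this thread:

```python
# Print the library versions discussed in this issue.
import torch, transformers, diffusers

print("torch:", torch.__version__)                # 2.3.0+cu118 reported above
print("transformers:", transformers.__version__)  # 4.40.2 reported above
print("diffusers:", diffusers.__version__)        # 0.26.2 reported above
```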
Make sure the files under pretrained_model are complete; otherwise you'd better re-download these models. Also, pay attention to the reference image and video size: they should both be square (see the sketch below).
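A minimal sketch of both checks suggested above, assuming Pillow and OpenCV are available; the expected-file list and media paths are placeholders, not the repo's actual manifest:

```python
# Sanity checks before queuing the workflow: weight files downloaded
# completely, and reference image / driving video both square.
import os
import cv2
from PIL import Image

PRETRAINED_DIR = r"E:\COMFY\ComfyUI-robe\custom_nodes\ComfyUI_Aniportrait\pretrained_model"
EXPECTED_FILES = ["motion_module.pth"]  # placeholder list; see the repo README for the full set

def check_downloads(root, names):
    # Fail loudly if a weight file is missing or truncated to 0 bytes.
    for name in names:
        path = os.path.join(root, name)
        assert os.path.isfile(path), f"missing: {path}"
        assert os.path.getsize(path) > 0, f"empty (incomplete download?): {path}"

def is_square_image(path):
    w, h = Image.open(path).size
    return w == h

def is_square_video(path):
    cap = cv2.VideoCapture(path)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    cap.release()
    return w == h

check_downloads(PRETRAINED_DIR, EXPECTED_FILES)
assert is_square_image("ref_image.png"), "reference image must be square"     # placeholder path
assert is_square_video("driving_video.mp4"), "driving video must be square"   # placeholder path
```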