
The following model_kwargs are not used by the model: ['encoder_hidden_states', 'encoder_attention_mask'] #121

Open
osi1880vr opened this issue Dec 16, 2022 · 4 comments

osi1880vr commented Dec 16, 2022

I keep getting this error. I migrated from the original BLIP repo to this new repo, but the error keeps following me.
I removed all existing models so fresh copies would be downloaded, to rule out stale or wrong checkpoints, but that did not help. My code is quite close to your example, yet it still fails. The funny thing is that it worked a few weeks ago and now keeps failing, so it might still be an issue on my machine, but I can't find it. Could you point me in the right direction?

This is what I am running:

import torch
from PIL import Image

# load_processor, load_model_cache, and generate_caption are the helper
# functions used by the LAVIS demo app; they are assumed to be imported
# from there.

raw_img = Image.open("00000-0-1.png").convert("RGB")

# setup device to use
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# build the BLIP eval image processor
vis_processor = load_processor("blip_image_eval").build(image_size=384)

model_type = "BLIP_large"

if model_type.startswith("BLIP"):
    blip_type = model_type.split("_")[1].lower()  # "large"
    # load the BLIP captioning model finetuned on COCO
    model = load_model_cache(
        "blip_caption",
        model_type=f"{blip_type}_coco",
        is_eval=True,
        device=device,
    )

use_beam = False  # tried True as well, same result
img = vis_processor(raw_img).unsqueeze(0).to(device)
captions = generate_caption(
    model=model, image=img, use_nucleus_sampling=not use_beam
)
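
For comparison, here is a minimal sketch of the equivalent flow through LAVIS's documented load_model_and_preprocess entry point (following the pattern in the project README); it exercises the same BLIP captioning path as the demo helpers above:

import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
raw_img = Image.open("00000-0-1.png").convert("RGB")

# load the BLIP captioning model (large variant, finetuned on MSCOCO)
# together with its image processors
model, vis_processors, _ = load_model_and_preprocess(
    name="blip_caption", model_type="large_coco", is_eval=True, device=device
)

# preprocess with the eval transform and generate a caption
image = vis_processors["eval"](raw_img).unsqueeze(0).to(device)
captions = model.generate({"image": image}, use_nucleus_sampling=True)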
@oeatekha

I am having a similar error, did you ever find a solution?

@LiJunnan1992
Contributor

@dxli94 could you take a look at this issue?

dxli94 commented Feb 10, 2023

Hi @osi1880vr , @oeatekha ,

This happens with specific versions of HF transformers. The issue does not occur with transformers versions >=4.15.0,<4.22.0 or >=4.25.0.

If you have to stick with another version, please see the explanation and fixes outlined here: huggingface/transformers#19290 (comment)
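
As a quick sanity check, here is a minimal sketch (assuming the packaging package, which transformers itself depends on) that verifies whether the installed transformers version falls inside one of the unaffected ranges listed above:

# warn if the installed transformers version is in an affected range,
# i.e. not in >=4.15.0,<4.22.0 and not >=4.25.0
from packaging import version
import transformers

v = version.parse(transformers.__version__)
ok = (
    version.parse("4.15.0") <= v < version.parse("4.22.0")
    or v >= version.parse("4.25.0")
)
if not ok:
    print(
        f"transformers {transformers.__version__} may trigger the "
        "'model_kwargs are not used' error; consider upgrading, e.g. "
        "pip install -U 'transformers>=4.25.0'"
    )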

@osi1880vr
Author

I'm on transformers 4.25.1 now and the issue is gone. Thanks a lot for the help!
