
Use clip to load local model #232

Open

tian-003 opened this issue Aug 23, 2023 · 1 comment

Comments

@tian-003

There is an error when I use clip to load a local model. My code is as follows:

.map('img', 'vec', ops.image_text_embedding.clip(model_name='clip_vit_base_patch16', modality='image', checkpoint_path='./model/clip-vit-base-patch16/pytorch_model.bin'))

The error that occurs is as follows:
File "/root/.towhee/operators/image-text-embedding/clip/versions/main/clip.py", line 108, in init
self.model = Model(real_name, modality, checkpoint_path, device)
File "/root/anaconda3/envs/towhee_env/lib/python3.7/site-packages/towhee/runtime/runtime_conf.py", line 88, in _decorated
return model(*args, **kwargs)
File "/root/.towhee/operators/image-text-embedding/clip/versions/main/clip.py", line 79, in init
self.model = create_model(model_name, modality, checkpoint_path, device)
File "/root/.towhee/operators/image-text-embedding/clip/versions/main/clip.py", line 44, in create_model
hf_clip_config = CLIPModel. from_config(model_name)

AttributeError: type object 'CLIPModel' has no attribute 'from_config'

How can I solve this problem? How do I load a local model?


0xac9527 commented Sep 8, 2023

Change the model-loading code in clip.py (/root/.towhee/operators/image-text-embedding/clip/versions/main/clip.py),
e.g. hf_clip_config = CLIPModel.from_config(model_name) -> hf_clip_model = CLIPModel.from_pretrained(checkpoint_path)

That works.
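
In context, the patched create_model in clip.py could look roughly like the sketch below. The (model_name, modality, checkpoint_path, device) signature comes from the traceback above; the body is only an illustration of the from_pretrained approach, not the operator's actual source. It also assumes checkpoint_path points at the model directory containing config.json and the weights (e.g. ./model/clip-vit-base-patch16), since Hugging Face from_pretrained expects a directory (or hub name) rather than the pytorch_model.bin file itself.

from transformers import CLIPModel

def create_model(model_name, modality, checkpoint_path, device):
    # Load config and weights from the local model directory;
    # this replaces the failing CLIPModel.from_config(model_name) call.
    hf_clip_model = CLIPModel.from_pretrained(checkpoint_path)
    hf_clip_model.to(device)
    hf_clip_model.eval()
    return hf_clip_model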
