
Inference errors, help! #21

Open · RYHSmmc opened this issue Jan 2, 2024 · 3 comments

@RYHSmmc commented Jan 2, 2024

```
Traceback (most recent call last):
  File "/cpfs/user/mingcanma/workspace/code/openseg/APE-main/demo/demo_lazy.py", line 135, in
    demo = VisualizationDemo(cfg, args=args)
  File "/cpfs/user/mingcanma/workspace/code/openseg/APE-main/demo/predictor_lazy.py", line 177, in __init__
    self.predictor = DefaultPredictor(cfg)
  File "/cpfs/user/mingcanma/workspace/code/openseg/APE-main/ape/engine/defaults.py", line 56, in __init__
    self.model = instantiate(cfg.model)
  File "/cpfs/user/mingcanma/workspace/code/openseg/APE-main/gitFile/detectron2-main/detectron2/config/instantiate.py", line 67, in instantiate
    cfg = {k: instantiate(v) for k, v in cfg.items()}
  File "/cpfs/user/mingcanma/workspace/code/openseg/APE-main/gitFile/detectron2-main/detectron2/config/instantiate.py", line 67, in
    cfg = {k: instantiate(v) for k, v in cfg.items()}
  File "/cpfs/user/mingcanma/workspace/code/openseg/APE-main/gitFile/detectron2-main/detectron2/config/instantiate.py", line 83, in instantiate
    return cls(**cfg)
  File "/cpfs/user/mingcanma/workspace/code/openseg/APE-main/ape/modeling/text/clip_wrapper_eva01.py", line 19, in __init__
    self.net, _ = build_eva_model_and_transforms(clip_model, pretrained=cache_dir)
  File "/cpfs/user/mingcanma/workspace/code/openseg/APE-main/ape/modeling/text/eva01_clip/eva_clip.py", line 165, in build_eva_model_and_transforms
    model = create_model(
  File "/cpfs/user/mingcanma/workspace/code/openseg/APE-main/ape/modeling/text/eva01_clip/eva_clip.py", line 110, in create_model
    load_checkpoint(model, pretrained)
  File "/cpfs/user/mingcanma/workspace/code/openseg/APE-main/ape/modeling/text/eva01_clip/eva_clip.py", line 82, in load_checkpoint
    state_dict = load_state_dict(checkpoint_path, model_key=model_key)
  File "/cpfs/user/mingcanma/workspace/code/openseg/APE-main/ape/modeling/text/eva01_clip/eva_clip.py", line 69, in load_state_dict
    checkpoint = torch.load(checkpoint_path, map_location=map_location)
  File "/opt/conda/lib/python3.10/site-packages/torch/serialization.py", line 771, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "/opt/conda/lib/python3.10/site-packages/torch/serialization.py", line 270, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "/opt/conda/lib/python3.10/site-packages/torch/serialization.py", line 251, in __init__
    super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'models/BAAI/EVA/eva_clip_psz14.pt'
munmap_chunk(): invalid pointer
Aborted (core dumped)
```

@shenyunhang (Owner)
It seems that train.init_checkpoint is not set in the config, so the model falls back to the default path models/BAAI/EVA/eva_clip_psz14.pt, which does not exist.

Please download the pre-trained model and set its path in train.init_checkpoint.
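For reference, detectron2's LazyConfig demos accept such settings as dotted-key overrides after --opts. The helper below is a hypothetical, stdlib-only sketch of how one dotlist override updates a nested config dict; it is not the actual detectron2 implementation.

```python
def apply_override(cfg, dotted_key, value):
    """Set a nested config entry from a dotted key, e.g.
    "train.init_checkpoint" -> cfg["train"]["init_checkpoint"].
    Illustrative sketch only, mirroring --opts-style overrides."""
    keys = dotted_key.split(".")
    node = cfg
    for k in keys[:-1]:
        # Walk (and create, if missing) intermediate sub-dicts.
        node = node.setdefault(k, {})
    node[keys[-1]] = value
    return cfg


cfg = {"train": {"init_checkpoint": "models/BAAI/EVA/eva_clip_psz14.pt"}}
apply_override(cfg, "train.init_checkpoint", "models/model_final.pth")
print(cfg["train"]["init_checkpoint"])  # models/model_final.pth
```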

@RYHSmmc (Author) commented Jan 3, 2024

```shell
python demo/demo_lazy.py \
  --config-file configs/LVISCOCOCOCOSTUFF_O365_OID_VGR_SA1B_REFCOCO_GQA_PhraseCut_Flickr30k/ape_deta/ape_deta_vitl_eva02_clip_vlf_lsj1024_cp_16x4_1080k.py \
  --input image1.jpg image2.jpg image3.jpg \
  --output ./out \
  --confidence-threshold 0.1 \
  --text-prompt 'person,car,chess piece of horse head' \
  --with-box \
  --with-mask \
  --with-sseg \
  --opts \
  train.init_checkpoint=models/model_final.pth \
  model.model_vision.select_box_nums_for_evaluation=500 \
  model.model_vision.text_feature_bank_reset=True
```
..................
I have set train.init_checkpoint to the path of APE-D, but I encounter the same kind of error: RuntimeError: Pretrained weights (models/QuanSun/EVA-CLIP/EVA02_CLIP_E_psz14_plus_s9B.pt) not found for model EVA02-CLIP-bigE-14-plus. Available pretrained tags: ['eva', 'eva02', 'eva_clip', 'eva02_clip'].

@shenyunhang (Owner) commented Jan 3, 2024

Thank you for the feedback.
The model tries to load the text encoder weights from EVA02_CLIP_E_psz14_plus_s9B.pt.
You can set model.model_language.cache_dir="" to disable that separate load; the text model will then be initialized from the APE-D checkpoint instead.
We will update the README.
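A minimal sketch of the behavior described above, assuming an empty cache_dir skips the separate EVA-CLIP weight file and defers to the main checkpoint (function and parameter names here are illustrative, not the actual APE API):

```python
import os


def resolve_text_weights(cache_dir, init_checkpoint):
    """Pick the source of the text-model weights.

    Hypothetical sketch: an empty cache_dir means "no separate
    EVA-CLIP file"; the text model is then initialized from the
    main APE-D checkpoint (train.init_checkpoint) instead.
    """
    if not cache_dir:
        return init_checkpoint  # fall back to the APE-D checkpoint
    if not os.path.isfile(cache_dir):
        # Mirrors the "Pretrained weights (...) not found" error above.
        raise FileNotFoundError(f"Pretrained weights ({cache_dir}) not found")
    return cache_dir


print(resolve_text_weights("", "models/model_final.pth"))  # models/model_final.pth
```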
