
Can't load config file correctly #26

Open
CallMeFrozenBanana opened this issue Jan 17, 2024 · 4 comments

CallMeFrozenBanana commented Jan 17, 2024

Hi, I downloaded the APE-D model and config file from the paths given in the README, and ran the following script:

```shell
python demo/demo_lazy.py \
--config-file configs/LVISCOCOCOCOSTUFF_O365_OID_VGR_SA1B_REFCOCO_GQA_PhraseCut_Flickr30k/ape_deta/ape_deta_vitl_eva02_clip_vlf_lsj1024_cp_16x4_1080k.py \
--input "./inputs/imgs_0083_339B1005-33B9-4e3c-B4A8-2D7EC23720C4_raw.jpg" \
--output "./seg_results" \
--confidence-threshold 0.1 \
--text-prompt 'car' \
--with-box \
--with-mask \
--with-sseg \
--opts \
train.init_checkpoint="./checkpoints/APE-D/model_final.pth" \
model.model_language.cache_dir="" \
model.model_vision.select_box_nums_for_evaluation=500 \
model.model_vision.text_feature_bank_reset=True \
model.model_vision.backbone.net.xattn=False \
model.model_vision.transformer.encoder.pytorch_attn=True \
model.model_vision.transformer.decoder.pytorch_attn=True \
```

But I got incorrect results, and assumed the config file was at fault.

CallMeFrozenBanana (Author)

If I instead use the bundled config file configs/LVISCOCOCOCOSTUFF_O365_OID_VGR_SA1B_REFCOCO_GQA_PhraseCut_Flickr30k/ape_deta/ape_deta_vitl_eva02_clip_vlf_lsj1024_cp_16x4_1080k_mdl.py, I get the following error:

```text
Traceback (most recent call last):
  File "/home/yenianjin/data/project/APE/demo/demo_lazy.py", line 137, in <module>
    demo = VisualizationDemo(cfg, args=args)
  File "/home/yenianjin/data/project/APE/demo/predictor_lazy.py", line 139, in __init__
    "__unused_" + "_".join([d for d in cfg.dataloader.train.dataset.names])
  File "/home/yenianjin/anaconda3/envs/ape/lib/python3.9/site-packages/omegaconf/listconfig.py", line 176, in __getattr__
    self._format_and_raise(
  File "/home/yenianjin/anaconda3/envs/ape/lib/python3.9/site-packages/omegaconf/base.py", line 190, in _format_and_raise
    format_and_raise(
  File "/home/yenianjin/anaconda3/envs/ape/lib/python3.9/site-packages/omegaconf/_utils.py", line 821, in format_and_raise
    _raise(ex, cause)
  File "/home/yenianjin/anaconda3/envs/ape/lib/python3.9/site-packages/omegaconf/_utils.py", line 719, in _raise
    raise ex.with_traceback(sys.exc_info()[2])  # set env OC_CAUSE=1 for full backtrace
omegaconf.errors.ConfigAttributeError: ListConfig does not support attribute access
    full_key: dataloader.train[dataset]
    object_type=list
```
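The traceback suggests that in this config `dataloader.train` is a list of dataloaders rather than a single dataloader, so attribute-style access like `cfg.dataloader.train.dataset.names` fails. A minimal pure-Python sketch of the distinction and a workaround (the config shapes below are assumptions inferred from the traceback, not copied from the APE repo):

```python
# Hypothetical config shapes, inferred from the traceback above:
# the demo expects a single dataloader (dict-like), while the *_mdl
# config apparently stores a *list* of dataloaders under dataloader.train,
# which is why attribute access raises ConfigAttributeError in omegaconf.
single_train = {"dataset": {"names": ["coco", "lvis"]}}
multi_train = [
    {"dataset": {"names": ["coco"]}},
    {"dataset": {"names": ["lvis"]}},
]

def collect_dataset_names(train_cfg):
    """Return all dataset names whether train_cfg is one dataloader or a list of them."""
    loaders = train_cfg if isinstance(train_cfg, list) else [train_cfg]
    return [name for dl in loaders for name in dl["dataset"]["names"]]

print(collect_dataset_names(single_train))  # ['coco', 'lvis']
print(collect_dataset_names(multi_train))   # ['coco', 'lvis']
```

This only illustrates the failure mode; as the maintainer notes below the traceback is moot here, since that config is not meant for inference.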

shenyunhang (Owner)

Hi, I ran the following command without any error. The problem may be the trailing \ on the last line, which should be removed. If you still get an error, please post a screenshot of it.

```shell
python3.9 demo/demo_lazy.py \
--config-file configs/LVISCOCOCOCOSTUFF_O365_OID_VGR_SA1B_REFCOCO_GQA_PhraseCut_Flickr30k/ape_deta/ape_deta_vitl_eva02_clip_vlf_lsj1024_cp_16x4_1080k.py \
--input "./inputs/imgs_0083_339B1005-33B9-4e3c-B4A8-2D7EC23720C4_raw.jpg" \
--output "./seg_results" \
--confidence-threshold 0.1 \
--text-prompt 'car' \
--with-box \
--with-mask \
--with-sseg \
--opts \
train.init_checkpoint="./checkpoints/APE-D/model_final.pth" \
model.model_language.cache_dir="" \
model.model_vision.select_box_nums_for_evaluation=500 \
model.model_vision.text_feature_bank_reset=True \
model.model_vision.backbone.net.xattn=False \
model.model_vision.transformer.encoder.pytorch_attn=True \
model.model_vision.transformer.decoder.pytorch_attn=True
```

The config file below can only be used for training; it will raise an error at inference time.

configs/LVISCOCOCOCOSTUFF_O365_OID_VGR_SA1B_REFCOCO_GQA_PhraseCut_Flickr30k/ape_deta/ape_deta_vitl_eva02_clip_vlf_lsj1024_cp_16x4_1080k_mdl.py

CallMeFrozenBanana (Author)

Thanks for your reply!
To be specific: the first script runs without errors, but produces a large number of parameter-loading warnings and wrong segmentation/detection results (e.g. the whole image covered with "car" detections, or no results at all).
```text
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.backbone.net.pos_embed' to the model due to incompatible shapes: (1, 197, 1024) in the checkpoint but (1, 442, 1024) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.backbone.net.rope_win.freqs_cos' to the model due to incompatible shapes: (256, 64) in the checkpoint but (1024, 64) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.backbone.net.rope_win.freqs_sin' to the model due to incompatible shapes: (256, 64) in the checkpoint but (1024, 64) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.backbone.net.blocks.0.attn.rope.freqs_cos' to the model due to incompatible shapes: (256, 64) in the checkpoint but (1024, 64) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.backbone.net.blocks.0.attn.rope.freqs_sin' to the model due to incompatible shapes: (256, 64) in the checkpoint but (1024, 64) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.backbone.net.blocks.1.attn.rope.freqs_cos' to the model due to incompatible shapes: (256, 64) in the checkpoint but (1024, 64) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.backbone.net.blocks.1.attn.rope.freqs_sin' to the model due to incompatible shapes: (256, 64) in the checkpoint but (1024, 64) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.backbone.net.blocks.2.attn.rope.freqs_cos' to the model due to incompatible shapes: (256, 64) in the checkpoint but (4096, 64) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.backbone.net.blocks.2.attn.rope.freqs_sin' to the model due to incompatible shapes: (256, 64) in the checkpoint but (4096, 64) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.backbone.net.blocks.3.attn.rope.freqs_cos' to the model due to incompatible shapes: (256, 64) in the checkpoint but (1024, 64) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.backbone.net.blocks.3.attn.rope.freqs_sin' to the model due to incompatible shapes: (256, 64) in the checkpoint but (1024, 64) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.backbone.net.blocks.4.attn.rope.freqs_cos' to the model due to incompatible shapes: (256, 64) in the checkpoint but (1024, 64) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.backbone.net.blocks.4.attn.rope.freqs_sin' to the model due to incompatible shapes: (256, 64) in the checkpoint but (1024, 64) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.backbone.net.blocks.6.attn.rope.freqs_cos' to the model due to incompatible shapes: (256, 64) in the checkpoint but (1024, 64) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.backbone.net.blocks.6.attn.rope.freqs_sin' to the model due to incompatible shapes: (256, 64) in the checkpoint but (1024, 64) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.backbone.net.blocks.7.attn.rope.freqs_cos' to the model due to incompatible shapes: (256, 64) in the checkpoint but (1024, 64) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.backbone.net.blocks.7.attn.rope.freqs_sin' to the model due to incompatible shapes: (256, 64) in the checkpoint but (1024, 64) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.backbone.net.blocks.8.attn.rope.freqs_cos' to the model due to incompatible shapes: (256, 64) in the checkpoint but (4096, 64) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.backbone.net.blocks.8.attn.rope.freqs_sin' to the model due to incompatible shapes: (256, 64) in the checkpoint but (4096, 64) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.backbone.net.blocks.9.attn.rope.freqs_cos' to the model due to incompatible shapes: (256, 64) in the checkpoint but (1024, 64) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.backbone.net.blocks.9.attn.rope.freqs_sin' to the model due to incompatible shapes: (256, 64) in the checkpoint but (1024, 64) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.backbone.net.blocks.10.attn.rope.freqs_cos' to the model due to incompatible shapes: (256, 64) in the checkpoint but (1024, 64) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.backbone.net.blocks.10.attn.rope.freqs_sin' to the model due to incompatible shapes: (256, 64) in the checkpoint but (1024, 64) in the model! You mig
...............
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.model_language.net.text.transformer.resblocks.3.ln_2.weight' to the model due to incompatible shapes: (768,) in the checkpoint but (1280,) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.model_language.net.text.transformer.resblocks.3.ln_2.bias' to the model due to incompatible shapes: (768,) in the checkpoint but (1280,) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.model_language.net.text.transformer.resblocks.3.mlp.c_fc.weight' to the model due to incompatible shapes: (3072, 768) in the checkpoint but (5120, 1280) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.model_language.net.text.transformer.resblocks.3.mlp.c_fc.bias' to the model due to incompatible shapes: (3072,) in the checkpoint but (5120,) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.model_language.net.text.transformer.resblocks.3.mlp.c_proj.weight' to the model due to incompatible shapes: (768, 3072) in the checkpoint but (1280, 5120) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.model_language.net.text.transformer.resblocks.3.mlp.c_proj.bias' to the model due to incompatible shapes: (768,) in the checkpoint but (1280,) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.model_language.net.text.transformer.resblocks.4.ln_1.weight' to the model due to incompatible shapes: (768,) in the checkpoint but (1280,) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.model_language.net.text.transformer.resblocks.4.ln_1.bias' to the model due to incompatible shapes: (768,) in the checkpoint but (1280,) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.model_language.net.text.transformer.resblocks.4.attn.in_proj_weight' to the model due to incompatible shapes: (2304, 768) in the checkpoint but (3840, 1280) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.model_language.net.text.transformer.resblocks.4.attn.in_proj_bias' to the model due to incompatible shapes: (2304,) in the checkpoint but (3840,) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.model_language.net.text.transformer.resblocks.4.attn.out_proj.weight' to the model due to incompatible shapes: (768, 768) in the checkpoint but (1280, 1280) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.model_language.net.text.transformer.resblocks.4.attn.out_proj.bias' to the model due to incompatible shapes: (768,) in the checkpoint but (1280,) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.model_language.net.text.transformer.resblocks.4.ln_2.weight' to the model due to incompatible shapes: (768,) in the checkpoint but (1280,) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.model_language.net.text.transformer.resblocks.4.ln_2.bias' to the model due to incompatible shapes: (768,) in the checkpoint but (1280,) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.model_language.net.text.transformer.resblocks.4.mlp.c_fc.weight' to the model due to incompatible shapes: (3072, 768) in the checkpoint but (5120, 1280) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.model_language.net.text.transformer.resblocks.4.mlp.c_fc.bias' to the model due to incompatible shapes: (3072,) in the checkpoint but (5120,) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.model_language.net.text.transformer.resblocks.4.mlp.c_proj.weight' to the model due to incompatible shapes: (768, 3072) in the checkpoint but (1280, 5120) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.model_language.net.text.transformer.resblocks.4.mlp.c_proj.bias' to the model due to incompatible shapes: (768,) in the checkpoint but (1280,) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.model_language.net.text.transformer.resblocks.5.ln_1.weight' to the model due to incompatible shapes: (768,) in the checkpoint but (1280,) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.model_language.net.text.transformer.resblocks.5.ln_1.bias' to the model due to incompatible shapes: (768,) in the checkpoint but (1280,) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.model_language.net.text.transformer.resblocks.5.attn.in_proj_weight' to the model due to incompatible shapes: (2304, 768) in the checkpoint but (3840, 1280) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.model_language.net.text.transformer.resblocks.5.attn.in_proj_bias' to the model due to incompatible shapes: (2304,) in the checkpoint but (3840,) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.model_language.net.text.transformer.resblocks.5.attn.out_proj.weight' to the model due to incompatible shapes: (768, 768) in the checkpoint but (1280, 1280) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.model_language.net.text.transformer.resblocks.5.attn.out_proj.bias' to the model due to incompatible shapes: (768,) in the checkpoint but (1280,) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.model_language.net.text.transformer.resblocks.5.ln_2.weight' to the model due to incompatible shapes: (768,) in the checkpoint but (1280,) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.model_language.net.text.transformer.resblocks.5.ln_2.bias' to the model due to incompatible shapes: (768,) in the checkpoint but (1280,) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.model_language.net.text.transformer.resblocks.5.mlp.c_fc.weight' to the model due to incompatible shapes: (3072, 768) in the checkpoint but (5120, 1280) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.model_language.net.text.transformer.resblocks.5.mlp.c_fc.bias' to the model due to incompatible shapes: (3072,) in the checkpoint but (5120,) in the model! You might want to double check if this is expected.
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Skip loading parameter 'model_vision.model_language.net.text.transformer.resblocks.5.mlp.c_proj.weight' to the model due to incompatible shapes: (768, 3072) in the checkpoint but (1280, 5120) in the model! You might want to double check if this is expected.
...........................
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: Some model parameters or buffers are not found in the checkpoint:
model_vision.backbone.net.blocks.0.attn.inner_attn_ln.{bias, weight}
model_vision.backbone.net.blocks.0.attn.rope.{freqs_cos, freqs_sin}
model_vision.backbone.net.blocks.1.attn.inner_attn_ln.{bias, weight}
model_vision.backbone.net.blocks.1.attn.rope.{freqs_cos, freqs_sin}
model_vision.backbone.net.blocks.10.attn.inner_attn_ln.{bias, weight}
model_vision.backbone.net.blocks.10.attn.rope.{freqs_cos, freqs_sin}
model_vision.backbone.net.blocks.11.attn.inner_attn_ln.{bias, weight}
model_vision.backbone.net.blocks.12.attn.inner_attn_ln.{bias, weight}
model_vision.backbone.net.blocks.12.attn.rope.{freqs_cos, freqs_sin}
model_vision.backbone.net.blocks.13.attn.inner_attn_ln.{bias, weight}
model_vision.backbone.net.blocks.13.attn.rope.{freqs_cos, freqs_sin}
model_vision.backbone.net.blocks.14.attn.inner_attn_ln.{bias, weight}
model_vision.backbone.net.blocks.15.attn.inner_attn_ln.{bias, weight}
model_vision.backbone.net.blocks.15.attn.rope.{freqs_cos, freqs_sin}
model_vision.backbone.net.blocks.16.attn.inner_attn_ln.{bias, weight}
model_vision.backbone.net.blocks.16.attn.rope.{freqs_cos, freqs_sin}
model_vision.backbone.net.blocks.17.attn.inner_attn_ln.{bias, weight}
model_vision.backbone.net.blocks.18.attn.inner_attn_ln.{bias, weight}
model_vision.backbone.net.blocks.18.attn.rope.{freqs_cos, freqs_sin}
model_vision.backbone.net.blocks.19.attn.inner_attn_ln.{bias, weight}
model_vision.backbone.net.blocks.19.attn.rope.{freqs_cos, freqs_sin}
model_vision.backbone.net.blocks.2.attn.inner_attn_ln.{bias, weight}
model_vision.backbone.net.blocks.20.attn.inner_attn_ln.{bias, weight}
model_vision.backbone.net.blocks.21.attn.inner_attn_ln.{bias, weight}
model_vision.backbone.net.blocks.21.attn.rope.{freqs_cos, freqs_sin}
model_vision.backbone.net.blocks.22.attn.inner_attn_ln.{bias, weight}
model_vision.backbone.net.blocks.22.attn.rope.{freqs_cos, freqs_sin}
model_vision.backbone.net.blocks.23.attn.inner_attn_ln.{bias, weight}
model_vision.backbone.net.blocks.3.attn.inner_attn_ln.{bias, weight}
model_vision.backbone.net.blocks.3.attn.rope.{freqs_cos, freqs_sin}
model_vision.backbone.net.blocks.4.attn.inner_attn_ln.{bias, weight}
model_vision.backbone.net.blocks.4.attn.rope.{freqs_cos, freqs_sin}
model_vision.backbone.net.blocks.5.attn.inner_attn_ln.{bias, weight}
model_vision.backbone.net.blocks.6.attn.inner_attn_ln.{bias, weight}
model_vision.backbone.net.blocks.6.attn.rope.{freqs_cos, freqs_sin}
model_vision.backbone.net.blocks.7.attn.inner_attn_ln.{bias, weight}
model_vision.backbone.net.blocks.7.attn.rope.{freqs_cos, freqs_sin}
model_vision.backbone.net.blocks.8.attn.inner_attn_ln.{bias, weight}
model_vision.backbone.net.blocks.9.attn.inner_attn_ln.{bias, weight}
model_vision.backbone.net.blocks.9.attn.rope.{freqs_cos, freqs_sin}
model_vision.backbone.net.pos_embed
model_vision.backbone.net.rope_win.{freqs_cos, freqs_sin}
model_vision.criterion.0.fed_loss_pad_classes
model_vision.criterion.2.fed_loss_cls_weights
model_vision.model_language.net.logit_scale
model_vision.model_language.net.text.ln_final.{bias, weight}
model_vision.model_language.net.text.token_embedding.weight
model_vision.model_language.net.text.transformer.resblocks.0.attn.out_proj.{bias, weight}
model_vision.model_language.net.text.transformer.resblocks.0.attn.{in_proj_bias, in_proj_weight}
model_vision.model_language.net.text.transformer.resblocks.0.ln_1.{bias, weight}
................
model_vision.transformer.encoder.vl_layers.3.b_attn.layer_norm_l.{bias, weight}
model_vision.transformer.encoder.vl_layers.3.b_attn.layer_norm_v.{bias, weight}
model_vision.transformer.encoder.vl_layers.3.b_attn.{gamma_l, gamma_v}
model_vision.transformer.encoder.vl_layers.4.b_attn.attn.l_proj.{bias, weight}
model_vision.transformer.encoder.vl_layers.4.b_attn.attn.out_l_proj.{bias, weight}
model_vision.transformer.encoder.vl_layers.4.b_attn.attn.out_v_proj.{bias, weight}
model_vision.transformer.encoder.vl_layers.4.b_attn.attn.v_proj.{bias, weight}
model_vision.transformer.encoder.vl_layers.4.b_attn.attn.values_l_proj.{bias, weight}
model_vision.transformer.encoder.vl_layers.4.b_attn.attn.values_v_proj.{bias, weight}
model_vision.transformer.encoder.vl_layers.4.b_attn.layer_norm_l.{bias, weight}
model_vision.transformer.encoder.vl_layers.4.b_attn.layer_norm_v.{bias, weight}
model_vision.transformer.encoder.vl_layers.4.b_attn.{gamma_l, gamma_v}
model_vision.transformer.encoder.vl_layers.5.b_attn.attn.l_proj.{bias, weight}
model_vision.transformer.encoder.vl_layers.5.b_attn.attn.out_l_proj.{bias, weight}
model_vision.transformer.encoder.vl_layers.5.b_attn.attn.out_v_proj.{bias, weight}
model_vision.transformer.encoder.vl_layers.5.b_attn.attn.v_proj.{bias, weight}
model_vision.transformer.encoder.vl_layers.5.b_attn.attn.values_l_proj.{bias, weight}
model_vision.transformer.encoder.vl_layers.5.b_attn.attn.values_v_proj.{bias, weight}
model_vision.transformer.encoder.vl_layers.5.b_attn.layer_norm_l.{bias, weight}
model_vision.transformer.encoder.vl_layers.5.b_attn.layer_norm_v.{bias, weight}
model_vision.transformer.encoder.vl_layers.5.b_attn.{gamma_l, gamma_v}
WARNING [01/17 15:20:01 fvcore.common.checkpoint]: The checkpoint state_dict contains keys that are not used by the model:
model_language.net.text.{logit_scale, positional_embedding, text_projection}
model_language.net.text.transformer.resblocks.0.ln_1.{bias, weight}
model_language.net.text.transformer.resblocks.0.attn.{in_proj_bias, in_proj_weight}
model_language.net.text.transformer.resblocks.0.attn.out_proj.{bias, weight}
model_language.net.text.transformer.resblocks.0.ln_2.{bias, weight}
model_language.net.text.transformer.resblocks.0.mlp.c_fc.{bias, weight}
model_language.net.text.transformer.resblocks.0.mlp.c_proj.{bias, weight}
model_language.net.text.transformer.resblocks.1.ln_1.{bias, weight}
model_language.net.text.transformer.resblocks.1.attn.{in_proj_bias, in_proj_weight}
model_language.net.text.transformer.resblocks.1.attn.out_proj.{bias, weight}
model_language.net.text.transformer.resblocks.1.ln_2.{bias, weight}
model_language.net.text.transformer.resblocks.1.mlp.c_fc.{bias, weight}
model_language.net.text.transformer.resblocks.1.mlp.c_proj.{bias, weight}
model_language.net.text.transformer.resblocks.2.ln_1.{bias, weight}
model_language.net.text.transformer.resblocks.2.attn.{in_proj_bias, in_proj_weight}
model_language.net.text.transformer.resblocks.2.attn.out_proj.{bias, weight}
model_language.net.text.transformer.resblocks.2.ln_2.{bias, weight}
model_language.net.text.transformer.resblocks.2.mlp.c_fc.{bias, weight}
model_language.net.text.transformer.resblocks.2.mlp.c_proj.{bias, weight}
model_language.net.text.transformer.resblocks.3.ln_1.{bias, weight}
model_language.net.text.transformer.resblocks.3.attn.{in_proj_bias, in_proj_weight}
model_language.net.text.transformer.resblocks.3.attn.out_proj.{bias, weight}
model_language.net.text.transformer.resblocks.3.ln_2.{bias, weight}
model_language.net.text.transformer.resblocks.3.mlp.c_fc.{bias, weight}
model_language.net.text.transformer.resblocks.3.mlp.c_proj.{bias, weight}
model_language.net.text.transformer.resblocks.4.ln_1.{bias, weight}
model_language.net.text.transformer.resblocks.4.attn.{in_proj_bias, in_proj_weight}
model_language.net.text.transformer.resblocks.4.attn.out_proj.{bias, weight}
model_language.net.text.transformer.resblocks.4.ln_2.{bias, weight}
model_language.net.text.transformer.resblocks.4.mlp.c_fc.{bias, weight}
model_language.net.text.transformer.resblocks.4.mlp.c_proj.{bias, weight}
model_language.net.text.transformer.resblocks.5.ln_1.{bias, weight}
model_language.net.text.transformer.resblocks.5.attn.{in_proj_bias, in_proj_weight}
model_language.net.text.transformer.resblocks.5.attn.out_proj.{bias, weight}
model_language.net.text.transformer.resblocks.5.ln_2.{bias, weight}
model_language.net.text.transformer.resblocks.5.mlp.c_fc.{bias, weight}
model_language.net.text.transformer.resblocks.5.mlp.c_proj.{bias, weight}
model_language.net.text.transformer.resblocks.6.ln_1.{bias, weight}
model_language.net.text.transformer.resblocks.6.attn.{in_proj_bias, in_proj_weight}
model_language.net.text.transformer.resblocks.6.attn.out_proj.{bias, weight}
....
```
The warnings are very long, so I haven't included all of them.

shenyunhang (Owner)

It looks like the model weight file is the problem.
Make sure the weights you downloaded are the APE-D ones.
Each APE version's config file corresponds to its own weights.
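One way to confirm a config/checkpoint mismatch like the one in the log is to diff parameter shapes before loading. A minimal sketch in plain Python (the torch-based lines in the comments and the sample key are illustrative assumptions, not APE code):

```python
# Sketch: report parameters whose shapes differ between checkpoint and model,
# i.e. exactly the parameters fvcore's Checkpointer would skip.
# With PyTorch you might build the shape dicts roughly like this (assumed layout):
#   ckpt_shapes  = {k: tuple(v.shape) for k, v in torch.load(path, map_location="cpu")["model"].items()}
#   model_shapes = {k: tuple(v.shape) for k, v in net.state_dict().items()}
def find_shape_mismatches(ckpt_shapes, model_shapes):
    """Return {key: (ckpt_shape, model_shape)} for shared keys whose shapes differ."""
    return {
        k: (ckpt_shapes[k], model_shapes[k])
        for k in ckpt_shapes.keys() & model_shapes.keys()
        if ckpt_shapes[k] != model_shapes[k]
    }

# Toy shapes mirroring the first warning in the log above:
ckpt = {"model_vision.backbone.net.pos_embed": (1, 197, 1024)}
model = {"model_vision.backbone.net.pos_embed": (1, 442, 1024)}
print(find_shape_mismatches(ckpt, model))
# {'model_vision.backbone.net.pos_embed': ((1, 197, 1024), (1, 442, 1024))}
```

A non-empty result on many keys, as in the log, usually means the checkpoint was trained for a different model variant (here, apparently not APE-D).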
