infer issue: 'NoneType' object has no attribute 'memory_efficient_attention' #51

Open
wdc233 opened this issue Jun 6, 2024 · 6 comments

Comments


wdc233 commented Jun 6, 2024

Namespace(confidence_threshold=0.2, config_file='configs/LVISCOCOCOCOSTUFF_O365_OID_VGR_SA1B_REFCOCO_GQA_PhraseCut_Flickr30k/ape_deta/ape_deta_vitl_eva02_clip_vlf_lsj1024_cp_16x4_1080k.py', input=['/6THardDisk/wendongcheng/SemiCD/CDData/WHU-CD-256/A/whucd_00020.png'], opts=['train.init_checkpoint=checkpoints/model_final.pth', 'model.model_vision.select_box_nums_for_evaluation=500', 'model.model_vision.text_feature_bank_reset=True', 'model.model_language.cache_dir='], output='APE_output/whu-cd_pseudo-label_ape_prob/A/', text_prompt='house,building,road,grass,tree,water', video_input=None, webcam=False, with_box=False, with_mask=False, with_sseg=True)
0%| | 0/1 [00:01<?, ?it/s]
Traceback (most recent call last):
  File "demo/demo_lazy.py", line 150, in <module>
    predictions, visualized_output, visualized_outputs, metadata = demo.run_on_image(
  File "/6THardDisk/wendongcheng/DiffMatch/APE/demo/predictor_lazy.py", line 209, in run_on_image
    predictions = self.predictor(image, text_prompt, mask_prompt)
  File "/6THardDisk/wendongcheng/DiffMatch/APE/ape/engine/defaults.py", line 229, in __call__
    predictions = self.model([inputs])[0]
  File "/6THardDisk/wendongcheng/anaconda3-envs/envs/APECD/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/6THardDisk/wendongcheng/DiffMatch/APE/ape/modeling/ape_deta/ape_deta.py", line 36, in forward
    losses = self.model_vision(batched_inputs, do_postprocess=do_postprocess)
  File "/6THardDisk/wendongcheng/anaconda3-envs/envs/APECD/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/6THardDisk/wendongcheng/DiffMatch/APE/ape/modeling/ape_deta/deformable_detr_segm_vl.py", line 372, in forward
    features = self.backbone(images.tensor)  # output feature dict
  File "/6THardDisk/wendongcheng/anaconda3-envs/envs/APECD/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/6THardDisk/wendongcheng/DiffMatch/APE/ape/modeling/backbone/vit_eva_clip.py", line 883, in forward
    bottom_up_features = self.net(x)
  File "/6THardDisk/wendongcheng/anaconda3-envs/envs/APECD/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/6THardDisk/wendongcheng/DiffMatch/APE/ape/modeling/backbone/vit_eva_clip.py", line 751, in forward
    x = blk(x)
  File "/6THardDisk/wendongcheng/anaconda3-envs/envs/APECD/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/6THardDisk/wendongcheng/anaconda3-envs/envs/APECD/lib/python3.8/site-packages/fairscale/nn/checkpoint/checkpoint_activations.py", line 171, in _checkpointed_forward
    return original_forward(module, *args, **kwargs)
  File "/6THardDisk/wendongcheng/DiffMatch/APE/ape/modeling/backbone/vit_eva_clip.py", line 514, in forward
    x = self.attn(x, rel_pos_bias=rel_pos_bias, attn_mask=attn_mask)
  File "/6THardDisk/wendongcheng/anaconda3-envs/envs/APECD/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/6THardDisk/wendongcheng/DiffMatch/APE/ape/modeling/backbone/vit_eva_clip.py", line 274, in forward
    x = xops.memory_efficient_attention(
AttributeError: 'NoneType' object has no attribute 'memory_efficient_attention'
May I ask what is causing this?
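For context, this error pattern typically comes from a guarded optional import: the module imports `xformers.ops` inside a `try`, falls back to `None` on failure, and only blows up later when the attention layer dereferences it. The sketch below is a hypothetical reproduction of that pattern, not APE's actual code; the `run_attention` helper and its `xattn` flag are illustrative names.

```python
# Hypothetical reproduction of the failure mode (APE's real
# vit_eva_clip.py may differ in detail).
try:
    import xformers.ops as xops  # optional dependency, often absent
except ImportError:
    xops = None  # the import failure is swallowed here...

def run_attention(x, xattn=True):
    """Mimic the attention forward: xformers path vs. plain fallback."""
    if xattn:
        if xops is None:
            # ...and only surfaces here, as the AttributeError:
            # 'NoneType' object has no attribute 'memory_efficient_attention'
            raise AttributeError(
                "'NoneType' object has no attribute 'memory_efficient_attention'"
            )
        return xops.memory_efficient_attention(x, x, x)
    return x  # placeholder for the plain-PyTorch attention path
```

So the root cause is simply that `xformers` failed to import, and nothing checked for that until the first forward pass.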


wdc233 commented Jun 6, 2024

apex.normalization.FusedLayerNorm not found, will use pytorch implementations
Please 'pip install xformers'
apex.normalization.FusedLayerNorm not found, will use pytorch implementations
======== shape of rope freq torch.Size([1024, 64]) ========
======== shape of rope freq torch.Size([4096, 64]) ========
It seems to suggest installing xformers, but as soon as I install it, pip upgrades my torch version, which then conflicts with my other packages, so it simply doesn't work. May I ask which versions you are using? Everything else on my side matches yours, but it fails at this point.
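Each xformers wheel is built against one specific torch build, which is why installing it can force a torch upgrade. Before fighting the resolver, it helps to see exactly which versions are installed; a small stdlib-only sketch (it reads metadata and installs nothing):

```python
# Print installed versions of the relevant packages, or None if absent.
import importlib.metadata as md

def installed_version(pkg: str):
    """Return the installed version string for pkg, or None if absent."""
    try:
        return md.version(pkg)
    except md.PackageNotFoundError:
        return None

for pkg in ("torch", "xformers", "fairscale"):
    print(pkg, "->", installed_version(pkg))
```

Once the installed torch version is known, one common approach is to install the matching xformers wheel with `pip install --no-deps` so pip cannot touch torch; which xformers version matches which torch build has to be checked against the xformers release notes.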

@XuYunqiu


Hi, I just ran into the same issue.
May I ask, have you resolved it?


wdc233 commented Jun 11, 2024


Not solved yet, but the problem is caused by the xformers package. I've seen others work around the environment issues by using a Docker image directly: docker pull keyk13/ape_cu118:v1 — you could try whether that image works for you.

@XuYunqiu


Thanks. I disabled xformers and then it ran:
model.model_vision.backbone.net.xattn=False
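Disabling `xattn` should not change the model's outputs in exact arithmetic: xformers' `memory_efficient_attention` computes the same scaled dot-product attention as the plain PyTorch path, just with a tiled kernel that avoids materializing the full attention matrix (small floating-point differences are possible). A pure-Python sketch of the math both paths compute:

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    es = [math.exp(v - m) for v in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(q, k, v):
    """Scaled dot-product attention over lists of head-dim vectors:
    out_i = sum_j softmax_j(q_i . k_j / sqrt(d)) * v_j"""
    d = len(q[0])
    out = []
    for qi in q:
        scores = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d) for kj in k]
        w = softmax(scores)
        out.append([sum(wj * vj[t] for wj, vj in zip(w, v)) for t in range(len(v[0]))])
    return out
```

The flag therefore trades speed and memory for portability, not correctness of the computation itself.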


wdc233 commented Jun 13, 2024



From what you can see, does disabling this affect inference performance?

@XuYunqiu


I'm not sure about the benchmark metrics; I only ran a few images I found online, and the results look fine.
