Error in the AOLM module #44

Open
895318 opened this issue May 16, 2024 · 0 comments

Comments

895318 commented May 16, 2024

Hello author, after adding the AOLM module to my model, the error below appears. The problem persists even after I switch from dual-GPU to single-GPU training. Strangely, I once completed a full training run with exactly the same parameters, yet when I train again it raises ValueError: max() arg is an empty sequence.
Traceback (most recent call last):
File "train.py", line 33, in
do_train(
File "/root/autodl-tmp/PART-master/processor/processor.py", line 102, in do_train
cls_g, cls_1 = model(img, target,mode='train') #0515
File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 168, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 178, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
output.reraise()
File "/root/miniconda3/lib/python3.8/site-packages/torch/_utils.py", line 425, in reraise
raise self.exc_type(msg)
ValueError: Caught ValueError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/root/autodl-tmp/PART-master/model/make_model.py", line 321, in forward
return self.forward_multi(inputs, label)
File "/root/autodl-tmp/PART-master/model/make_model.py", line 406, in forward_multi
coordinates = torch.tensor(AOLM(out.detach()))
File "/root/autodl-tmp/PART-master/model/make_model.py", line 33, in AOLM
max_idx = areas.index(max(areas))
ValueError: max() arg is an empty sequence
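
As a possible workaround I am experimenting with a guard for the case where no connected component survives the thresholding step, since that is when `areas` ends up empty and `max()` fails. Below is a minimal, self-contained sketch of the idea; the helper name and the skimage-based connected-component logic are my own assumptions, not the repository's actual AOLM code:

```python
import numpy as np
from skimage import measure


def largest_component_bbox(mask, fallback=None):
    """Return the bounding box (min_row, min_col, max_row, max_col) of the
    largest connected component in a boolean mask.

    Hypothetical helper: if the thresholded mask has no foreground pixels,
    `areas` is empty and max() would raise ValueError, so return `fallback`
    (e.g. the full feature-map box) instead of crashing.
    """
    labeled = measure.label(mask)            # label connected components
    regions = measure.regionprops(labeled)   # one entry per component
    areas = [r.area for r in regions]
    if not areas:                            # empty mask -> no components
        return fallback
    max_idx = areas.index(max(areas))        # index of the largest component
    return regions[max_idx].bbox


if __name__ == "__main__":
    H, W = 14, 14
    empty_mask = np.zeros((H, W), dtype=bool)
    # Without the guard, an all-zero mask reproduces:
    # ValueError: max() arg is an empty sequence
    print(largest_component_bbox(empty_mask, fallback=(0, 0, H, W)))
```

Falling back to the whole feature map keeps the training step alive, but I am not sure whether that is the behaviour the authors intend, which is why I am reporting the error here.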
