Hi author, after adding the AOLM module to my model I get the error below. Switching from a dual-GPU setup to a single GPU does not make it go away. What is strange is that I once completed a full training run with the same parameters, yet re-running training now raises ValueError: max() arg is an empty sequence.
```
Traceback (most recent call last):
  File "train.py", line 33, in
    do_train(
  File "/root/autodl-tmp/PART-master/processor/processor.py", line 102, in do_train
    cls_g, cls_1 = model(img, target, mode='train')  # 0515
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 168, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 178, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
    output.reraise()
  File "/root/miniconda3/lib/python3.8/site-packages/torch/_utils.py", line 425, in reraise
    raise self.exc_type(msg)
ValueError: Caught ValueError in replica 0 on device 0.
Original Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
    output = module(*input, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/autodl-tmp/PART-master/model/make_model.py", line 321, in forward
    return self.forward_multi(inputs, label)
  File "/root/autodl-tmp/PART-master/model/make_model.py", line 406, in forward_multi
    coordinates = torch.tensor(AOLM(out.detach()))
  File "/root/autodl-tmp/PART-master/model/make_model.py", line 33, in AOLM
    max_idx = areas.index(max(areas))
ValueError: max() arg is an empty sequence
```
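The failing line `max_idx = areas.index(max(areas))` raises this error whenever `areas` is empty, which likely means AOLM's thresholded activation map produced no connected components for some sample in the batch (e.g., all activations fell below the threshold). A minimal sketch of a defensive guard follows; `pick_largest_component` and its `fallback_idx` parameter are hypothetical names for illustration, not the repo's actual AOLM code:

```python
def pick_largest_component(areas, fallback_idx=0):
    """Return the index of the largest connected-component area.

    `areas` stands in for the per-component area list computed inside
    AOLM. If no component survived thresholding, max() would raise
    ValueError, so we fall back to a default index instead.
    """
    if not areas:  # max() on an empty sequence raises ValueError
        return fallback_idx
    return areas.index(max(areas))


print(pick_largest_component([3, 7, 2]))  # index of the largest area -> 1
print(pick_largest_component([]))         # empty list -> fallback -> 0
```

In the real AOLM code, a guard like this (or falling back to the full feature map as the crop region when no component exists) would keep training from crashing on such batches.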