
Exception after training on Docker image e88040337fa3 #8764

Closed · 1 of 2 tasks
sstainba opened this issue Jul 28, 2022 · 10 comments · Fixed by #9250
Labels
bug Something isn't working

Comments

@sstainba

Search before asking

  • I have searched the YOLOv5 issues and found no similar bug report.

YOLOv5 Component

Training

Bug

After training two different custom models, both runs end with several exceptions thrown:

wandb: Synced 5 W&B file(s), 111 media file(s), 1 artifact file(s) and 0 other file(s)
wandb: Find logs at: ./wandb/run-20220728_115228-25fqw5ps/logs
Exception ignored in: <function StorageWeakRef.__del__ at 0x7fc14e083040>
Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/site-packages/torch/multiprocessing/reductions.py", line 38, in __del__
  File "/opt/conda/lib/python3.8/site-packages/torch/storage.py", line 636, in _free_weak_ref
AttributeError: 'NoneType' object has no attribute '_free_weak_ref'

This exception is repeated about 20 times.

Environment

Docker image e88040337fa3
Windows 11 Host using WSL2
Nvidia 3060 12GB GPU

Minimal Reproducible Example

Doesn't appear to be specific to my work. This happened with two different data sets.

Dataset 1: 300 images, 1280 x 1024, 1 Tag/Class
Command: train.py --rect --imgsz 1280 --img 1024 --epochs 500 --name test1 --cache --batch 4 --data /usr/src/datasets/data.yaml --weights yolov5m6.pt

Dataset 2: 200 images 416 x 416, 1 Tag/Class
Command: train.py --img 416 --epochs 500 --name splash --cache --batch 8 --data /usr/src/datasets/splash/data.yaml --weights yolov5s.pt

Additional

No response

Are you willing to submit a PR?

  • Yes I'd like to help by submitting a PR!
@sstainba added the bug label on Jul 28, 2022
@github-actions
Contributor

github-actions bot commented Jul 28, 2022

👋 Hello @sstainba, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email [email protected].

Requirements

Python>=3.7.0 with all requirements.txt installed including PyTorch>=1.7. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.

@glenn-jocher
Member

@sstainba this is a PyTorch issue that should be resolved in later versions of torch. You can safely ignore it.

@github-actions
Contributor

github-actions bot commented Aug 29, 2022

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.


Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!

github-actions bot added the Stale label Aug 29, 2022
@CanisLupus518

CanisLupus518 commented Aug 31, 2022

@glenn-jocher I am curious what the pytorch issue is.

After running into this myself and digging into it a bit more, I found documentation from NVIDIA related to WSL2 limitations: here. Basically, in this scenario, use of pin_memory=True is very limited. This seems more like an NVIDIA/WSL2 issue than a PyTorch issue.
This issue exists both when running directly in WSL2 and when running in Docker+WSL2.

For yolov5, I was able to successfully train in Docker+WSL2+CUDA only after changing pin_memory to False in utils/dataloaders.py.

I wonder if you might consider exposing pin_memory as an argument to train, val, detect, etc. to allow for greater flexibility.
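
For readers hitting the same thing, here is a minimal, self-contained sketch of the workaround described above. It uses a placeholder dataset and a plain torch.utils.data.DataLoader; it is not the actual utils/dataloaders.py code, just an illustration of where pin_memory is set.

import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset so the example runs on its own (not YOLOv5 data).
dataset = TensorDataset(torch.zeros(16, 3, 64, 64), torch.zeros(16, dtype=torch.long))

# pin_memory=True normally speeds up host-to-GPU copies, but pinned
# (page-locked) memory support is limited under WSL2, so the workaround
# is simply to disable it when the loader is built.
loader = DataLoader(dataset, batch_size=4, num_workers=2, pin_memory=False)

if __name__ == "__main__":
    for imgs, labels in loader:
        pass  # a training step would go here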

github-actions bot removed the Stale label Sep 1, 2022
@glenn-jocher
Member

@CanisLupus518 got it, thanks for the info! I'll take a look at simply setting this to False. Last time I profiled it, pinned memory did provide benefits on newer GPUs like the T4 and V100. What CUDA device are you using?

@glenn-jocher
Member

@CanisLupus518 could we conditionally pin memory depending on the CUDA version or device properties?
[Screenshot attached: 2022-09-01 at 11:19:30]
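
One possible shape for that kind of conditional default, sketched under the assumption that the main case to avoid is WSL2 (checking the kernel release string is a common heuristic). This is illustrative only, not code that YOLOv5 ships.

import platform
import torch

def suggested_pin_memory() -> bool:
    """Heuristic default: pin memory only when CUDA is available and we are
    not running under WSL, where page-locked memory support is limited."""
    if not torch.cuda.is_available():
        return False
    # WSL kernels typically include 'microsoft' in their release string.
    if "microsoft" in platform.uname().release.lower():
        return False
    return True

print(f"pin_memory default: {suggested_pin_memory()}")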

@glenn-jocher
Member

@CanisLupus518 I profiled 3 epochs of VOC and got 0.121 hours with pin_memory=True and 0.126 hours with pin_memory=False. Epoch times were about 1:51 (True) and 1:56 (False), so I think we want it enabled by default, but I'll add an environment variable to allow overriding it.

@glenn-jocher glenn-jocher linked a pull request Sep 1, 2022 that will close this issue
@glenn-jocher
Member

glenn-jocher commented Sep 1, 2022

@CanisLupus518 good news 😃! Your original issue may now be fixed ✅ in PR #9250. This PR allows the user to disable pin_memory during training/val using a new PIN_MEMORY environment variable, i.e.

PIN_MEMORY=False python train.py 
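
For anyone wiring up something similar in their own code, a hedged sketch of how an environment variable like this can be read and passed through to a DataLoader (the exact parsing used in PR #9250 may differ):

import os

import torch
from torch.utils.data import DataLoader, TensorDataset

# Treat anything other than an explicit "false" as True, so the previous
# default behaviour is kept unless the user opts out.
PIN_MEMORY = os.getenv("PIN_MEMORY", "True").strip().lower() != "false"

# Placeholder dataset; only the pin_memory wiring matters here.
dataset = TensorDataset(torch.zeros(8, 3, 32, 32), torch.zeros(8, dtype=torch.long))
loader = DataLoader(dataset, batch_size=4, pin_memory=PIN_MEMORY)

With that in place, setting PIN_MEMORY=False in the environment flips the flag for that run without touching the code; leaving it unset keeps the old behaviour.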

To receive this update:

  • Git – git pull from within your yolov5/ directory, or git clone https://github.com/ultralytics/yolov5 again
  • PyTorch Hub – force-reload with model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)
  • Notebooks – view the updated notebooks on Colab or Kaggle
  • Docker – sudo docker pull ultralytics/yolov5:latest to update your image

Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!

@CanisLupus518

@glenn-jocher wow. Thank you so much. To answer your question, I’m using an RTX 3070 Ti.

@glenn-jocher
Member

@CanisLupus518 you're welcome! 😀 Excellent choice in GPUs, the RTX 3070 Ti is a powerhouse. Let us know if you encounter any other issues or have further suggestions. Happy training with YOLOv5 and the RTX 3070 Ti 🚀!
